My colleague Eli Dourado brought to my attention this XKCD comic and when tweeting it out yesterday he made the comment that “Half of tech policy is dealing with these people”:
The comic and Eli’s comment may be a bit snarky, but something about it rang true to me because while conducting research on the impact of new information technologies on society I often come across books, columns, blog posts, editorials, and tweets that can basically be summed up with the line from that comic: “we should stop to consider the consequences of [this new technology] before we …” Or, equally common is the line: “we need to have a conversation about [this new technology] before we…”
But what does that really mean? Certainly “having a conversation” about the impact of a new technology on society is important. But what is the nature of that “conversation”? How is it conducted? How do we know when it is going on or when it is over?
The International Association of Privacy Professionals (IAPP) has been running some terrific guest essays on its Privacy Perspectives blog lately. (I was honored to be asked to submit an essay to the site a few weeks ago about the ongoing Do Not Track debate.) Today, the IAPP has published one of the most interesting essays on the so-called “right to be forgotten” that I have ever read. (Disclosure: We’ve written a lot about this issue here in the past and have been highly skeptical regarding both the sensibility and practicality of the notion. See my Forbes column, “Erasing Our Past on the Internet,” for a concise critique.)
In her fascinating and important IAPP guest essay, archivist Cherri-Ann Beckles asks, “Will the Right To Be Forgotten Lead to a Society That Was Forgotten?” Beckles, who is Assistant Archivist at the University of the West Indies, powerfully explains the importance of archiving history and warns about the pitfalls of trying to censor history through a “right to be forgotten” regulatory scheme. She notes that archives “protect individuals and society as a whole by ensuring there is evidence of accountability in individual and/or collective actions on a long-term basis. The erasure of such data may have a crippling effect on the advancement of a society as it relates to the knowledge required to move forward.”
She concludes by arguing that:
From the preservation of writings on the great pharaohs to the world’s greatest thinkers and inventors as well as the ordinary man and woman, archivists recognise that without the actions and ideas of people, both individually and collectively, life would be meaningless. Society only benefits from the actions and ideas of people when they are recorded, preserved for posterity and made available. Consequently, the “right to be forgotten” if not properly executed, may lead to “the society that was forgotten.”
Importantly, Beckles also stresses individual responsibility and the need for people to be cautious about the digital footprints they leave online. “More attention should instead be paid to educating individuals to ensure that the record they create on themselves is one they wish to be left behind,” she notes. “Control of data at the point of creation is far more manageable than trying to control data after records capture.”
That means that in most American neighborhoods, consumers are stuck with a broadband monopoly. And monopolies don’t strive to offer the best, cheapest service. Rather, they use speed as a tool to discriminate by price — coaxing consumers who are willing to pay for high-speed broadband into more costly and profitable tiers.
First, no matter how well-intentioned, restrictions on data collection could negatively impact the competitiveness of America’s digital economy, as well as consumer choice.
Second, it is unwise to place too much faith in any single, silver-bullet solution to privacy, including “Do Not Track,” because such schemes are easily evaded or defeated and often fail to live up to their billing.
Finally, with those two points in mind, we should look to alternative and less costly approaches to protecting privacy that rely on education, empowerment, and targeted enforcement of existing laws. Serious and lasting long-term privacy protection requires a layered, multifaceted approach incorporating many solutions.
The testimony also contains four appendices elaborating on some of these themes.
Remember all the businesses, internet techies and NGOs who were screaming about an “ITU takeover of the Internet” a year ago? Where are they now? Because this time, we actually need them.
May 14–21 is Internet governance week in Geneva. We have declared it so because there will be three events in that week for the global community concerned with global internet governance. From May 14–16 the International Telecommunication Union (ITU) holds its World Telecommunication Policy Forum (WTPF). This year it is devoted to internet policy issues. With the polarizing results of the Dubai World Conference on International Telecommunications (WCIT) still reverberating, the meeting will revisit debates about the role of states in Internet governance. Next, on May 17 and 18, the Graduate Institute of International and Development Studies and the Global Internet Governance Academic Network (GigaNet) will hold an international workshop on The Global Governance of the Internet: Intergovernmentalism, Multi-stakeholderism and Networks. Here, academics and practitioners will engage in what should be a more intellectually substantive debate on modes and principles of global Internet governance.
Last but not least, the UN Internet Governance Forum will hold its semi-annual consultations to prepare the program and agenda for its next meeting in Bali, Indonesia. The IGF consultations are relevant because, to put it bluntly, it is the failure of the IGF to bring governments, the private sector and civil society together in a commonly agreed platform for policy development that is partly responsible for the continued tension between multistakeholder and intergovernmental institutions. Whether the IGF can get its act together and become more relevant is one of the key issues going forward.
The US Patent and Trademark Office is starting to recognize that it has a software patent problem and is soliciting suggestions for how to improve software patent quality. A number of parties such as Google and EFF have filed comments.
I am on record against the idea of patenting software at all. I think it is too difficult for programmers, as they are writing code, to constantly check whether they are violating existing software patents, which are not, after all, easy to identify. Furthermore, any complex piece of software is likely to violate hundreds of patents owned by competitors, which makes license negotiation costly and far from straightforward.
However, given that the abolition of software patents seems unlikely in the medium term, there are some good suggestions in the Google and EFF briefs. They both note that the software patents granted to date have been overbroad, equivalent to patenting headache medicine in general rather than patenting a particular molecule for use as a headache drug.
This argument highlights one significant problem with patent systems generally, that they depend on extremely high-quality review of patent applications to function effectively. If we’re going to have patents for software, or anything else, we need to take the review process seriously. Consequently, I would favor whatever increase in patent application fees is necessary to ensure that the quality of review is rock solid. Give USPTO the resources it needs to comply with existing patent law, which seems to preclude such overbroad patents. Simply applying patent law consistently would reduce some of the problems with software patents.
Higher fees would also function as a Pigovian tax on patenting, disincentivizing patent protection for minor innovations. This is desirable because the licensing cost of these minor innovations is likely to exceed the social benefits the patents generate, if any.
While it remains preferable to undertake major patent reform, many of the steps proposed by Google and EFF are good marginal policy improvements. I hope the USPTO considers these proposals carefully.
ARIN is the Internet numbers registry for the North American region. It likes to present itself as a paragon of multistakeholder governance and a staunch opponent of the International Telecommunication Union’s encroachments into Internet governance. Surely, if anyone wants to keep the ITU out of Internet addressing and routing policy, it would be ARIN. And conversely, in past years the ITU has sought to carve away some of the authority over IP addressing from ARIN and other RIRs.
But wait, what is this? On March 15 the ITU Secretary-General released a preparatory report for the ITU’s World Telecommunication Policy Forum, which will take place in Geneva May 14–16. The report contains six Internet-related policy resolutions “to provide a basis for discussion … focusing on key issues on which it would be desirable to reach conclusions.” Draft Opinion #3 pertains to Internet addressing. Among other things, the draft resolves:
“that needs-based address allocation should continue to underpin IP address allocation, irrespective of whether they are IPv6 or IPv4, and in the case of IPv4, irrespective of whether they are legacy or allocated address space;
“that all IPv4 transactions be reported to the relevant RIRs, including transactions of legacy addresses that are not necessarily subject to the policies of the RIRs regarding transfers, as supported by the policies developed by the RIR communities;”
“that policies of inter-RIR transfer across all RIRs should ensure that such transfers are needs based and be common to all RIRs irrespective of the address space concerned.”
These policy positions thrust the ITU and its intergovernmental machinery directly into the realm of IP addressing policy. But that is quite predictable; the ITU has always wanted to do that. What is unusual about these resolutions is that they bear an uncanny resemblance to the policy positions currently advocated by ARIN and the U.S. Department of Commerce.
Today Reason has published my policy paper addressing privacy concerns created by search, social networking and Web-based e-commerce in general.
These web sites have been in regulatory crosshairs for some time, although Congress and the Federal Trade Commission have been hesitant to push forward with restrictive legislation such as “Do Not Track” and mandatory opt-in or top-down mandates such as the White House-drafted “Privacy Bill of Rights.” And the U.S. seems unwilling to go to the lengths Europe is, contemplating unworkable rules like an “Internet eraser button”—a sort of online memory hole that would scrub any information about you that is accessible on the Web, even if it is part of the public record.
In my paper, It’s Not Personal: The Dangers of Misapplied Policies to Search, Social Media and Other Web Content, I discuss the difficulty of regulating personal disclosure because different people have different thresholds for privacy. We all know people who refuse to go on Facebook because they are wary of allowing too much information about themselves to circulate. Where it gets dicey is when authority figures take a paternalistic attitude and start deciding what information I will not be allowed to share, for what they claim is my own good.
Top-down mandates really don’t work, mainly because popular attitudes are always in flux. Offer me 50 percent off on a hotel room, and I may be willing to tell you where I’m vacationing. Find me interesting books and movies, and I may be happy to let you know my favorite titles.
Instead, ground-up guidelines that arise as users become more comfortable with the medium, and as sites work to establish trust, work better. True, Google and Facebook often push the envelope in trying to determine where user boundaries are, but they pull back when they run into user protest. And when the FTC took up Google’s and Facebook’s practices, the agency shook a metaphorical finger at both companies’ aggressiveness but assessed no fines or penalties, essentially finding that no consumer harm was done.
This course has been wise. The willingness of users to exchange information about themselves in return for value is an important element of e-commerce. It is worth considering some likely consequences if the government pushes too hard to prevent sites from gathering information about users.
We learned today that Robert M. McDowell, who has served as a Commissioner at the Federal Communications Commission for almost seven years, will be leaving the agency shortly. I’m sad to hear it. Commissioner McDowell has been a great champion of freedom across the board, from traditional communications and media reform to cutting-edge Internet policy issues. On one issue after another, fans of liberty could count on Rob McDowell to perfectly articulate and defend the pro-freedom position on high-tech policy matters whenever and wherever he wrote or spoke.
I can’t even begin to list all the things we’ve written here over the years at the TLF about McDowell and his excellent body of work while he served at the FCC, but a quick custom search of this blog yields dozens of columns all gushing with praise for the seemingly endless string of outstanding speeches and statements that he made since joining the agency in 2006. But I just want to highlight two of McDowell’s most eloquent speeches and strongly encourage you to go read or re-read them because they will inspire you to keep up the good fight to expand the sphere of liberty in this field: