As 2011 winds down, I thought I’d list a few year-end analytics for The Technology Liberation Front blog. In 2011, we had just under 400,000 visits from 332,000 unique visitors. That’s up from 312,000 visits and 247,000 unique visitors in 2010. If you prefer a pageviews metric, then we had 514,000 pageviews and 444,000 unique pageviews in 2011, up from 420,000 and 361,000 respectively in 2010.
As far as the top posts of the year go, anything Bitcoin related proved to be link-bait magic with 4 of the top 10 posts (all by Jerry Brito) being about Bitcoin. But it was Ryan Radia’s post on how to protect your privacy on Facebook and the Net more generally that commanded the most hits in 2011 with almost 35,000 views. Congrats Ryan! Anyway, the year’s Top 10 list follows below and many thanks to all those who took the time to visit the TLF over the past year.
ICANN’s plan to open up the domain name space to new top level domains is scheduled to begin January 12, 2012. This long overdue implementation is the result of an open process that began in 2006. It would, in fact, be more realistic to say that the decision has been in the works for 15 years; i.e., since early 1997. That is when demand for new top-level domain names, and the need for other policy decisions regarding the coordination of the domain name system, made it clear that a new institutional framework had to be created. ICANN was the progressive and innovative U.S. response to that need. It was created to become a nongovernmental, independent, truly global and representative policy development authority.
The result has been far from perfect, but human institutions never are. Over the past 15 years, every stakeholder with a serious interest in the issue of top level domains has had multiple opportunities to make their voice heard and to shape the policy. The resulting new gTLD policy reflects that diversity and complexity. From our point of view, it is too regulatory, too costly, and makes too many concessions to content regulators and trademark holders. But it will only get worse with delay. The compromise that emerged from the process paves the way for movement forward after a long period of artificial scarcity, opening up new business opportunities.
Just last week I was discussing the terrifically interesting work of Michael Sacasas, who pens The Frailest Thing, a poetic blog about technology and culture. [see: "Information Revolutions & Cultural / Economic Tradeoffs"] I highly recommend you follow his blog even if you struggle to keep up with his brilliance, as I often do. He posted another great essay today entitled, “Nostalgia: The Third Wave,” in which he discusses the work of the late social critic Christopher Lasch and his writings on memory and nostalgia. Go read the entire thing since I cannot possibly do it justice here. Anyway, I posted a short comment over there that I thought I would just republish here in case others are interested. I find the issue of nostalgia to be quite interesting.
Michael… I’m currently finishing up a paper looking at the causes of various “techno-panics” over time. I try to group together a variety of theories and possible explanations, one of which is labeled “Hyper-Nostalgia, Pessimistic Bias & Soft Ludditism.” I don’t go into anywhere near the detail you do here, but I did unearth a number of interesting things while conducting research.
Have you ever come across the book On Longing: Narratives of the Miniature, the Gigantic, the Souvenir, the Collection, by the poet Susan Stewart? She notes that what is ironic about nostalgia is that it is rooted in something typically unknown by the proponent. Consequently, she argues that nostalgia represents “a sadness without an object, a sadness which creates a longing that of necessity is inauthentic because it does not take part in lived experience. Rather, it remains behind and before that experience.” Too often, Stewart observes, “nostalgia wears a distinctly utopian face” and thus becomes a “social disease.”
That’s probably a bit extreme, but it does help explain why some intellectuals, social critics, and policymakers occasionally demonize new mediums, technologies, or forms of culture. If one is suffering from a rather extreme version of what Michael Shermer refers to as “rosy retrospection bias” (The Believing Brain, 2011), or “the tendency to remember past events as being more positive than they actually were,” then it would hardly be surprising that such a person would adopt attitudes and policies that disfavor the new and different.
My thanks to both Maria H. Andersen and Michael Sacasas for their thoughtful responses to my recent Forbes essay on “10 Things Our Kids Will Never Worry About Thanks to the Information Revolution.” They both go point by point through my Top 10 list and offer an alternative way of looking at each of the trends I identify. What their responses share in common is a general unease with the hyper-optimism of my Forbes piece. That’s understandable. Typically in my work on technological “optimism” and “pessimism” — and yes, I admit those labels are overly simplistic — I always try to strike a sensible balance between Pollyannaism and hyper-pessimism as it pertains to the impact of technological change on our culture and economy. I have called this middle ground position “pragmatic optimism.” In my Forbes essay, however, I was in full-blown Pollyanna mode. That doesn’t mean I don’t generally feel very positive about the changes I itemized in that essay; rather, I just didn’t have the space in a 1,000-word column to identify the tradeoffs inherent in each trend. Thus, Andersen and Sacasas are rightfully pushing back against my lack of balance.
But there is a problem with their slightly pessimistic pushback, too. To better explain my own position and respond to Andersen and Sacasas, let me return to the story we hear again and again in discussions of technological change: the well-known allegorical tale from Plato’s Phaedrus about the dangers of the written word. In the tale, the god Theuth comes to King Thamus and boasts of how Theuth’s invention of writing would improve the wisdom and memory of the masses relative to the oral tradition of learning. King Thamus shot back, “the discoverer of an art is not the best judge of the good or harm which will accrue to those who practice it.” King Thamus then passed judgment himself about the impact of writing on society, saying he feared that the people “will receive a quantity of information without proper instruction, and in consequence be thought very knowledgeable when they are for the most part quite ignorant.”
Filings are due to the Federal Trade Commission (FTC) today as part of its review of the Children’s Online Privacy Protection Act (COPPA) and the COPPA rule that the FTC devised and enforces. I didn’t have time to pen as much as I wanted, but I did submit a short filing to the agency in the matter based on some of my previous work both with Berin Szoka and on my own. Here’s the executive summary for my filing:
It goes without saying that the Children’s Online Privacy Protection Act (COPPA) is a complicated law and rule. When considering the rule and proposals to amend it, it is easy to get lost in the weeds and ignore the bigger picture. That would be a mistake. There are broader, more important questions that need to be asked as part of the Federal Trade Commission’s effort to expand this regulatory regime. These questions involve not only the costs of increased regulation for online business interests, but the impact of expanded regulation on market structure, competition, and innovation. More importantly, these questions cut to the core of whether the public (including children) will be served with more and better digital innovations in the future. There is no free lunch. Regulation—even well-intentioned regulation like COPPA—is not a costless exercise. There are profound trade-offs for online content and culture that must always be considered.
Whatever one thinks about the effectiveness or sensibility of the COPPA regulatory model for the Web 1.0 world, it is clear that the regime is being strained by the unforeseen realities of the Web 2.0 world of hyper-ubiquitous connectivity and user-generated content creation and sharing. The digital genie cannot be put back in the bottle. While COPPA may continue to have a marginal role to play in this rapidly evolving world, that role will likely be increasingly limited by the inherent realities of the information age.
Tis the season to be thankful for a great number of things — family, health, welfare, etc. I certainly don’t mean to diminish the importance of those other things by suggesting that technological advances are on par with them, but I do think it is worth celebrating just how much better off we are because of the amazing innovations flowing from the information revolution and digital technology. In my latest Forbes column, I cite ten such advances and couch them in an old-fashioned “kids-these-days” sort of rant. My essay is entitled, “10 Things Our Kids Will Never Worry About Thanks to the Information Revolution,” and it itemizes some things that today’s digital generation will never experience or have to worry about thanks to the modern information revolution. They include: taking a typing class, buying an expensive set of encyclopedias, having to pay someone else to develop photographs, using a payphone or racking up a big “long-distance” bill, and six others.
Incidentally, this little piece has reminded me how Top 10 lists are the equivalent of op-ed magic and link-bait heaven. People have a way of fixating on lists — Top 3, Top 5, Top 10, etc. — unlike any other literary or rhetorical device. In fact, with roughly 80,000 views and over 900 retweets, I am quite certain that this is not only my most widely read Forbes column to date, but quite possibly the most widely read thing I have done in 20 years of policy writing. Therefore, henceforth, every column I pen will be a “Top 10” list! No, no, just kidding. But make no mistake about it, that little gimmick works. In fact, 4 of the top 5 columns on Forbes currently are lists.
Back in September, the Senate Judiciary Committee’s Antitrust Subcommittee held a hearing on “The Power of Google: Serving Consumers or Threatening Competition?” Given the harsh questioning from the Subcommittee’s Chairman Herb Kohl (D-WI) and Ranking Member Mike Lee (R-UT), no one should have been surprised by the letter they sent yesterday to the Federal Trade Commission asking for a “thorough investigation” of the company. At least this time the danger is somewhat limited: by calling for the FTC to investigate Google, the senators are thus urging the agency to do . . . exactly what it’s already doing.
So one must wonder about the real aim of the letter. Unfortunately, the goal does not appear to be to offer an objective appraisal of the complex issues intended to be addressed at the hearing. That’s disappointing (though hardly surprising) and underscores what we noted at the time of the hearing: There’s something backward about seeing a company hauled before a hostile congressional panel and asked to defend itself, rather than its self-appointed prosecutors being asked to defend their case.
Senators Kohl and Lee insist that they take no position on the legality of Google’s actions, but their lopsided characterization of the issues in the letter—and the fact that the FTC is already doing what they purport to desire as the sole outcome of the letter!—leaves little room for doubt about their aim: to put political pressure on the FTC not merely to investigate, but to reach a particular conclusion and bring a case in court (or simply to ratchet up public pressure from its bully pulpit).
Today, AT&T announced it had abandoned its planned acquisition of T-Mobile after the DOJ sued to block the deal and the FCC published a report sharply critical of the deal. The following statement can be attributed to TechFreedom Fellows Larry Downes, Geoffrey Manne and Berin Szoka:
Nearly two years ago, the Obama FCC declared a spectrum crisis. But Congress has refused to authorize the agency to reallocate underused spectrum from television broadcasters and government agencies—which would take years anyway.
The AT&T/T-Mobile merger would have eased this crisis and accelerated the deployment of next-generation 4G networks. The government killed the deal based on formalistic and outdated measures of market concentration—even though the FCC’s own data show dynamic competition, falling prices, and new entry. The disconnect is jarring.
Those celebrating the deal’s collapse will wake up to a sober reality: There is no Plan B for more spectrum. All the hand-wringing about “preserving” competition has only denied consumers a strong 4G LTE competitor to compete with Verizon—and slammed the brakes on continued growth of the mobile marketplace.
Unfortunately, this is just part of a broader pattern of regulators attempting to engineer technology markets they don’t understand. The letter sent today by the Senate Antitrust Subcommittee urging the Department of Justice to investigate Google’s business practices relies on similar contortions of market definition to conclude that the search market is not competitive. In both cases, regulators are applying 1960s economics to 21st century markets.
Ultimately, it’s consumers who will lose from such central planning.
The FCC’s universal service tax is officially out of control. The agency announced yesterday that the “universal service contribution factor” for the 1st quarter of 2012 will go up to 17.9%. This “contribution factor” is a tax imposed on telecom companies that is adjusted on a quarterly basis to fund universal service programs. The FCC doesn’t like people calling it a tax, but that’s exactly what it is. And it just keeps growing and growing. In fact, as the chart below reveals, it has been exploding in recent years. It was in single digits just a few years ago but is now heading toward 20%. And not only is this tax growing more burdensome, but it is completely unsustainable. As the taxable base (traditional interstate telephony) is eroded by new means of communicating, the tax rate will have to grow exponentially or the base will have to be broadened to cover new technologies and services. We should have junked the current carrier-delivered universal service subsidy system years ago and gone with a straightforward voucher system. A means-tested voucher could have targeted assistance to those who needed it without creating an inefficient, unsustainable hidden tax like we have now. For all the ugly details, I recommend reading all of Jerry Ellig’s research on the issue.