By Berin Szoka & Adam Thierer

We learned from The Wall Street Journal yesterday that “Federal Communications Commission Chairman Julius Genachowski gets a little peeved when people suggest that he wants to regulate the Internet.” He told a group of Journal reporters and editors that: “I don’t see any circumstances where we’d take steps to regulate the Internet itself,” and “I’ve been clear repeatedly that we’re not going to regulate the Internet.”

We’re thankful to hear Chairman Genachowski make that promise. We’ll certainly hold him to it. But you will pardon us if we remain skeptical (and, in advance, if you hear a constant stream of “I told you so” from us in the months and years to come). If the Chairman is “peeved” at the suggestion that the FCC might be angling to extend its reach to include the Internet and new media platforms and content, perhaps he should start taking a closer look at what his own agency is doing—and think about the precedents he’s setting for future Chairmen who might not share his professed commitment not to regulate the ‘net. Allow us to cite just a few examples:

Net Neutrality Notice of Proposed Rulemaking

We’re certainly aware of the argument that the FCC’s proposed net neutrality regime is not tantamount to Internet regulation—but we just don’t buy it. Not for one minute.

First, Chairman Genachowski seems to believe that “the Internet” is entirely distinct from the physical infrastructure that brings “cyberspace” to our homes, offices and mobile devices. The WSJ notes, “when pressed, [Genachowski] admitted he was referring to regulating Internet content rather than regulating Internet lines.” OK, so let’s just make sure we have this straight: The FCC is going to enshrine in law the principle that “gatekeepers” that control the “bottleneck” of broadband service can only be checked by having the government enforce “neutrality” principles in the same basic model of “common carrier” regulation that once applied to canals, railroads, the telegraph and telephone. But when it comes to accusations of “gatekeeper” power at the content/services/applications “layers” of the Internet, the FCC is just going to step back and let markets sort things out? Sorry, we’re just not buying it. Continue reading →

Most of you have probably already seen this, but Pingdom recently aggregated and posted some amazing stats about “Internet 2009 In Numbers.” They’re all worth checking out, but here are some highlights:

  • 1.73 billion Internet users worldwide as of Sept 2009; 18% increase in Internet users since previous year.
  • 81.8 million .COM domain names at the end of 2009; 12.3 million .NET & 7.8 million .ORG
  • 234 million websites as of Dec 2009; 47 million were added in 2009.
  • 90 trillion emails sent on the Internet in 2009; 1.4 billion email users worldwide.
  • 26 million blogs on the Internet.
  • 27.3 million tweets on Twitter per day as of Nov 2009.
  • 350 million people on Facebook; 50% of them log in every day; + 500,000 active Facebook applications.
  • 4 billion photos hosted by Flickr as of Oct 2009; 2.5 billion photos uploaded each month to Facebook.
  • 1 billion videos served by YouTube each day; 12.2 billion videos viewed per month; 924 million videos viewed per month on Hulu in the US as of Nov 2009; + the average Internet user in the US watches 182 online videos each month.

And yet some people claim that digital generativity and online innovation are dead! Things have never been better.

Over at Mashable, Ben Parr has a post (“Facebook Turns to the Crowd to Eradicate Offensive Content”) expressing surprise that Facebook has a crowdsourcing / community policing solution to deal with objectionable content:

Did you know that Facebook has a crack team of employees whose mission is to deal with offensive content and user complaints? Their ranks number in the hundreds. But while most websites have people on staff to deal with porn and violence, none of them have 350 million users to manage… Now the world’s largest social network found a way to deal with this shortage of manpower, though. Facebook has begun testing a new feature called the Facebook Community Council [currently invite-only]. According to a guest post on the Boing Boing blog by one of the council’s members, its goal is to purge Facebook of nudity, drugs, violence, and spam.

The Facebook Community Council is actually a Facebook app and tool for evaluating content for various offenses… The app’s tagging system allows council members to tag content with one of eight phrases: Spam, Acceptable, Not English, Skip, Nudity, Drugs, Attacking, and Violence. If enough council members tag a piece of content with the same tag, action is taken, often a takedown.

What Facebook is doing here is nothing all that new. Many other social networking sites and platforms, such as MySpace and Ning, do much the same, as do video hosting sites like YouTube. [See my summary of YouTube’s efforts below]
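The threshold-based tagging system Parr describes is straightforward to sketch. This is purely illustrative: Facebook has not published its actual threshold or decision logic, so the threshold value and the `decide` helper below are assumptions, not the real implementation.

```python
from collections import Counter

# The eight tags reported in the Mashable piece.
TAGS = {"Spam", "Acceptable", "Not English", "Skip",
        "Nudity", "Drugs", "Attacking", "Violence"}

THRESHOLD = 3  # assumed value; the real threshold is not public

def decide(tags):
    """Return the winning tag if any tag meets the threshold, else None."""
    counts = Counter(t for t in tags if t in TAGS)
    if not counts:
        return None
    tag, n = counts.most_common(1)[0]
    return tag if n >= THRESHOLD else None

print(decide(["Spam", "Spam", "Spam", "Acceptable"]))  # → Spam
print(decide(["Nudity", "Skip"]))  # → None (too few votes agree)
```

The point of the design is that no single reviewer can trigger a takedown; action follows only when enough independent council members converge on the same tag.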

No doubt, some will be quick to decry as “private censorship” these moves by social networking sites, video hosting sites, and others to flag and remove objectionable content within their communities, but such critics need to understand that:
Continue reading →

Three months ago, when the DC Circuit struck down the FCC’s “Cable Cap”—which prevented any one cable company from serving more than 30% of US households out of fear that the larger cable companies would use their “gatekeeper” power to restrict programming—the New York Times bemoaned the decision:

The problem with the cap is not that it is too onerous, but that it is not demanding enough.

Even with the cap — and satellite television — there is a disturbing lack of price competition. The cable companies have resisted letting customers choose, a la carte, the channels they actually watch….

[The FCC] needs to ensure that customers have an array of choices among cable providers, and that there is real competition on price and program offerings.

Perhaps the Times’ editors should have consulted the Lead Technology Writer of their excellent BITS blog. Nick Bilton might have told them the truth: “Cable Freedom Is a Click Away.” That’s the title of his excellent survey of devices and services (Hulu, Boxee, iTunes, Joost, YouTube, etc.) that allow users to get cable television programming without a cable subscription.

Nick explains that consumers can “cut the video cord” and still find much, if not all, of their favorite cable programming—as well as the vast offerings of online video—without a hefty monthly subscription. (Adam recently described how Clicker.com is essentially a TV guide for the increasing cornucopia of Internet video.) This makes the 1992 Cable Act’s requirement that the FCC impose a cable cap nothing more than the vestige of a bygone era of platform scarcity, predating not just the Internet, but also competing subscription services offered by satellite and telcos over fiber. That’s precisely what we argued in PFF’s amicus brief to the DC Circuit a year ago, and largely why the court ultimately struck down the cap.

Bilton notes that “this isn’t as easy as just plugging a computer into a monitor, sitting back and watching a movie. There’s definitely a slight learning curve.”  But, as he describes, cutting the cord isn’t rocket science.  If getting used to using a wireless mouse is the thing that most keeps consumers “enslaved” to the cable “gatekeepers” the FCC frets so much about, what’s the big deal?  Does government really need to set aside the property and free speech rights of cable operators to run their own networks just because some people may not be as quick to dump cable as Bilton?  Is the lag time between early adopters and mainstream really such a problem that we would risk maintaining outdated systems of architectural censorship (Chris Yoo’s brilliant term) that give government control over speech in countless subtle and indirect ways? Continue reading →

The disabled have much to give thanks for this year—but contrary to common assumptions, it’s not for paternalistic government accessibility mandates, regulations or subsidies (see, for example, the FCC’s November 6 Broadband Accessibility workshop), but for the good ol’ fashioned private sector ingenuity that has made America great. Five broad categories of examples suggest how constantly-improving computing power and innovation can make life easier for many, if not all, disabled users—and how market forces empower the disabled along with everyone else.

Video transcription. Last week, Google announced “the preliminary roll-out of automatic captioning in YouTube, an innovation that takes advantage of our speech recognition technology to turn the spoken word into text captions.” Google uses the same speech recognition technology it refined with its free Goog-411 and Google Voice services to automatically transcribe video dialog (which can also be automatically translated using Google’s translation engine). Why? Not because of any government mandate, but because of some combination of three factors: (i) it’s an easy way for Google to invest in its “reputational capital,” (ii) the underlying technologies of transcribing videos make videos easier to use for all users, not just the hearing-impaired, and (iii) those technologies also make it possible to contextually target advertising to the verbal content of videos.

It’s worth noting that Hulu currently offers closed captioning for some of its television programming but notes that “closed-captioning data that’s used for broadcast TV isn’t easily translated for online use.” The online television clearinghouse promises to offer more closed-captioning soon. Perhaps they ought to license Google’s algorithmic transcription?

Voice recognition for direct consumer use—most notably, Dragon NaturallySpeaking 10, the latest version of the leading voice recognition software, which was released in summer 2008 but only recently seems to have really hit critical mass. Continue reading →

The Internet is massive. That’s the ‘no-duh’ statement of the year, right?  But seriously, the sheer volume of transactions (both economic and non-economic) is simply staggering.  Consider a few factoids to give you a flavor of just how much is going on out there:

  • In 2006, Internet users in the United States viewed an average of 120.5 Web pages each day.
  • There are over 1.4 million new blog posts every day.
  • Social networking giant Facebook reports that each month, its over 300 million users upload more than 2 billion photos, 14 million videos, and create over 3 million events. More than 2 billion pieces of content (web links, news stories, blog posts, notes, photos, etc.) are shared each week. There are also roughly 45 million active user groups on the site.
  • YouTube reports that 20 hours of video are uploaded to the site every minute.
  • Amazon reported that on December 15, 2008, 6.3 million items were ordered worldwide, a rate of 72.9 items per second.
  • Every six weeks, there are 10 million edits made to Wikipedia.
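The Amazon figure is easy to sanity-check: 6.3 million orders spread across one 24-hour day comes out to the quoted rate.

```python
# Sanity-check Amazon's December 15, 2008 figure:
# 6.3 million orders in one 24-hour day.
orders = 6_300_000
seconds_per_day = 24 * 60 * 60  # 86,400
rate = orders / seconds_per_day
print(f"{rate:.1f} items per second")  # → 72.9 items per second
```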

Now, let’s think about how some of our lawmakers and media personalities talk about the Internet. If we were to judge the Internet based upon the daily headlines in various media outlets or from the titles of various Congressional or regulatory agency hearings, then we’d be led to believe that the Internet is a scary, dangerous place. That’s especially the case when it comes to concerns about online privacy and child safety. Everywhere you turn there’s a bogeyman story about the supposed dangers of cyberspace.

But let’s go back to the numbers. While I certainly understand the concerns many folks have about their personal privacy or their child’s safety online, the fact is that the vast majority of transactions taking place online each and every second of the day are of an entirely harmless, even socially beneficial nature. I refer to this disconnect as the “problem of proportionality” in debates about online safety and privacy. People are not just making mountains out of molehills; in many cases they are making the molehills up or blowing them massively out of proportion. Continue reading →

I really enjoyed my Second Life appearance on “Government’s Place in Virtual Worlds and Online Communities,” which was hosted by Metanomics.  You can watch the entire segment on the Metanomics site.  But the folks at Metanomics have also posted 6 clips from the show at YouTube that highlight some of the topics we discussed.  Here’s the list of clips and the videos:

Part 1: Are the Feds about to Regulate Second Life & Virtual Worlds?

Continue reading →

Last Wednesday, Holman Jenkins penned a column in The Wall Street Journal about net neutrality (Adam discussed it here). In response, I have a letter to the editor in today’s Wall Street Journal:

To the Editor:

Mr. Jenkins suggests that Google would likely “shriek” if a startup were to mount its servers inside the network of a telecom provider. Google already does just that. It is called “edge caching,” and it is employed by many content companies to keep costs down.

It is puzzling, then, why Google continues to support net neutrality. As long as Google produces content that consumers value, they will demand an unfettered Internet pipe. Political battles aside, content and infrastructure companies have an inherently symbiotic relationship.

Fears that Internet providers will, absent new rules, stifle user access to content are overblown. If a provider were to, say, block or degrade YouTube videos, its customers would likely revolt and go elsewhere. Or they would adopt encrypted network tunnels, which route around Internet roadblocks.

Not every market dispute warrants a government response. Battling giants like Google and AT&T can resolve network tensions by themselves.

Ryan Radia

Competitive Enterprise Institute

Washington

To be sure, the market for residential Internet service is not all that competitive in some parts of the country — Rochester, New York, for instance — so a provider might in some cases be able to get away with unsavory practices for a sustained period without suffering the consequences. Yet ISP competition is on the rise, and a growing number of Americans have access to three or more providers. This is especially true in big cities like Chicago, Baltimore, and Washington D.C.

Instead of trying to put a band-aid on problems that stem from insufficient ISP competition, the FCC should focus on reforming obsolete government rules that prevent ISP competition from emerging. Massive swaths of valuable spectrum remain unavailable to would-be ISP entrants, and municipal franchising rules make it incredibly difficult to lay new wire in public rights-of-way for the purpose of delivering bundled data and video services.

Google Searching for Growth

The Google juggernaut’s revenue growth has slowed steadily in the last five years, causing The Wall Street Journal to caution investors about buying Google stock. While much of the slowdown in Google’s revenue may be attributed to the recession, the WSJ cautions that:

  • Microsoft is offering stiffer competition in search, which will only intensify once antitrust regulators approve its partnership with Yahoo! and the two companies actually implement their partnership (which could take another year);
  • YouTube’s promise as an ad platform remains uncertain;
  • Google lags behind Apple and Research in Motion in developing mobile phone operating systems, with Android still unproven;
  • It remains unclear how successful the company will be in expanding beyond its existing lead in small text ads into the potentially lucrative realm of banner ads.

Somehow I doubt Google’s fall to Earth will do much to allay the concerns of those who see Google as the kind of evil monopolist Microsoft was made out to be in the 90s.

As the Journal concludes, “It would be foolish to predict that Google won’t have another business success, of course… Google may itself discover the next Google-like business.” As long as someone’s out there working to turn today’s idle fantasies into tomorrow’s multi-billion dollar businesses, consumers win—whoever that bold innovator might be.

“Liberty upsets patterns.” That was one of the many lessons that the late Harvard philosopher Robert Nozick taught us in his 1974 masterpiece “Anarchy, State, and Utopia.” What Nozick meant was that there is a fundamental tension between liberty and egalitarianism such that when people are left to their own devices, some forms of inequality would be inevitable and persistent throughout society. (Correspondingly, any attempt to force patterns, or outcomes, upon society requires a surrender of liberty.)

No duh, right? Most people understand this today—even if some of them are all too happy to hand their rights over to the government in exchange for momentary security or some other promise. In the world of media policy, however, many people still labor under the illusion that liberty and patterned equality are somehow reconcilable. That is, some media policy utopians and Internet pollyannas would like us to believe that if you give every man, woman, and child a platform on which to speak, everyone will be equally heard. Moreover, in pursuit of that goal, some of them argue government should act to “upset patterns” and push to achieve more “balanced” media outcomes. That is the philosophy that has guided the “media access” movement for decades, and it is what fuels the “media reformista” movement led by groups like the (inappropriately named) Free Press, which was founded by neo-Marxist media theorist Robert McChesney.

Alas, perfect media equality remains an elusive pipe dream. As I have pointed out here before, there has never been anything close to “equal outcomes” when it comes to the distribution or relative success of books, magazines, music, movies, theater tickets, etc. A small handful of titles have always dominated, usually according to a classic “power law” or “80-20” distribution, with roughly 20% of the titles getting 80% of the traffic and revenue. And this trend is increasing, not decreasing, for newer and more “democratic” online media.
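The 80-20 pattern falls out of the math almost automatically. Here is a minimal sketch, under assumed parameters (1,000 hypothetical titles and a Zipf distribution with exponent 1, the textbook power-law case), showing how a small share of titles captures most of the attention:

```python
# Illustrative only: Zipf-like popularity across 1,000 hypothetical titles.
# Title at popularity rank r gets traffic proportional to 1/r.
N = 1000
traffic = [1 / rank for rank in range(1, N + 1)]

# What share of total traffic goes to the top 20% of titles?
share_top_20pct = sum(traffic[: N // 5]) / sum(traffic)
print(f"Top 20% of titles draw {share_top_20pct:.0%} of total traffic")
```

With these parameters the top fifth of titles draws close to four-fifths of the traffic—the classic 80-20 shape—without anyone designing that outcome in.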

For example, recent research has revealed that “the top 10% of prolific Twitter users accounted for over 90% of tweets” and “the top 15% of the most prolific [Wikipedia] editors account for 90% of Wikipedia’s edits.” As Clay Shirky taught us back in 2003 in this classic essay, the same has long held true for blogging, where outcomes are radically inegalitarian, with a tiny number of blogs getting the overwhelming volume of blogosphere attention. The reason, Shirky pointed out, is that:

Continue reading →