My piece on last Friday’s U.S. Chamber of Commerce event, at which U.S. intellectual property attachés reported on, and took a hard line on, the enforcement of U.S. intellectual property overseas, is now live on ip-watch.org.
Here are the first couple of paragraphs:
WASHINGTON, DC – Nations ranging from Brazil to Brunei to Russia are failing to properly protect the intellectual property assets of US companies and others, and international organisations are not doing enough to stop it, seven IP attachés to the US Foreign and Commercial Service lamented recently.
Meanwhile, an industry group issued detailed recommendations for the incoming Obama administration’s changes to the US Patent and Trademark Office.
The problems in other nations extend from Brazil’s failure to issue patents for commercially significant inventions by US inventors, to an almost-complete piracy-based economy in Brunei, to an only-modest drop in the rate of Russian piracy from 65 percent to 58 percent.
The attachés, speaking at an event organised by the US Chamber of Commerce and its recently beefed-up Global Intellectual Property Center (GIPC), blasted the record of familiar intellectual property trouble zones like Brunei, Thailand and Russia.
But the problems extend to the attitudes and omissions of major trading partners like Brazil, India and even well-developed European nations, said the attachés.
Too many advocates of regulation seem to have never considered the possibility that the FCC bureaucrats in charge of making these decisions at any point in time might be lazy, incompetent, technically confused, or biased in favor of industry incumbents. That’s often what “real regulators” are like, and it’s important that when policy makers are crafting a regulatory scheme, they assume that some of the people administering the law will have these kinds of flaws, rather than imagining that the rules they write will be applied by infallible philosopher-kings.
Ironically, Prof. Lessig — who typically defends many forms of high-tech regulation like Net neutrality and online content labeling — is essentially agreeing with Tim’s critique of bureaucracy. Yet Lessig seems to ignore the underlying logic of Tim’s critique and instead imagines that we need only reinvent bureaucracy in order to save it. But I’m getting ahead of myself. First, let’s hear what Lessig proposes.
In a Newsweek column this week entitled “Reboot the FCC,” Lessig argues that the FCC is beyond saving because, instead of protecting innovation, the agency has succumbed to an “almost irresistible urge to protect the most powerful instead.” Consequently, he continues:
The solution here is not tinkering. You can’t fix DNA. You have to bury it. President Obama should get Congress to shut down the FCC and similar vestigial regulators, which put stability and special interests above the public good. In their place, Congress should create something we could call the Innovation Environment Protection Agency (iEPA), charged with a simple founding mission: “minimal intervention to maximize innovation.” The iEPA’s core purpose would be to protect innovation from its two historical enemies–excessive government favors, and excessive private monopoly power.
As was the case with his earlier call to “blow up the FCC,” I am tickled to hear Lessig call for shutting down an agency that many of us have been fighting against for the last few decades. (Here’s a 1995 blueprint for abolishing the FCC that I contributed to, and here’s PFF’s recent “DACA” project to comprehensively reform and downsize the agency.)
But is Lessig really calling for the same sort of sweeping regulatory reform and downsizing that others have been advocating? And has he identified the real source of the problem that he hopes to correct? I don’t think so. There are three basic problems with the argument Lessig is putting forward in his essay. I will address each in turn.
Don’t miss Jim Harper’s excellent post on the strange way people have responded to the failures of regulation on Wall Street. In a Meet the Press exchange, we learn that people reported Bernie Madoff’s suspicious books to the SEC, which chose not to do anything about it. And it was agreed around the table that the Madoff affair debunks “the idea that wealthy individuals and ‘sophisticated’ institutional investors don’t need the protection of government regulators.” “There’s no question we need a real regulator,” says CNBC’s Erin Burnett.
The problem is that we had a “real regulator.” Ponzi schemes and dishonest bookkeeping are already illegal. Had the SEC been so motivated, it had all the authority it needed to investigate Madoff’s books, discover the problems, and shut his firm down. In a rational world, this would be taken as a cautionary tale about the dangers of assuming that regulators will be vigilant, competent, or interested in defending the interests of the general public rather than those with political clout. Instead, we live in a bizarro world in which people believe that the SEC’s failure to do its job is an illustration of the need to give agencies like the SEC more power.
We of course see the same sort of confusion in debates over regulation of the technology sector. For example, the leading network neutrality proposals invariably wind up placing a significant amount of authority in the hands of the FCC to decide the exact definition of network neutrality and to resolve complex questions about what constitutes a network neutrality violation. Too many advocates of regulation seem to have never considered the possibility that the FCC bureaucrats in charge of making these decisions at any point in time might be lazy, incompetent, technically confused, or biased in favor of industry incumbents. That’s often what “real regulators” are like, and it’s important that when policy makers are crafting a regulatory scheme, they assume that some of the people administering the law will have these kinds of flaws, rather than imagining that the rules they write will be applied by infallible philosopher-kings.
A couple of quick follow-ups to my last post: first, a commenter points out that CareerBuilder is an example of a successful spin-off from a major media company. (Actually, from three media companies; I bet being owned by multiple companies helped insulate it from any one firm’s internal politics.) So it appears that the spin-off model can work.
Second, the always-interesting Tom Lee points to the Washington Post’s online operation as an example of the spin-off model. This is a really interesting example because it’s closer to the core of the WaPo’s business than CareerBuilder is to Gannett’s. And by all accounts, it was relatively successful. I’m pretty sure I’ve read multiple people comment that the Post is a local newspaper with a national website, which is precisely what you’d want a successful spin-off news organization to become.
The problem is that washingtonpost.com is nowhere close to being a free-standing organization. It benefits tremendously from access to content from the print Post, and while I haven’t looked at its business model in any detail, I’d be willing to bet that there’s massive cross-subsidy going on. That makes it a better website, but it also relieves the web side of the business of the need to come up with new, lower-cost methods of generating news. Which means that if and when the print side hits an iceberg, the online side won’t be able to stand on its own.
Having recently read The Innovator’s Dilemma, it’s worth pointing out that the discussion Ezra Klein and Matt Yglesias are having about the decline of newspapers is a classic illustration of the principles Clayton Christensen laid out a decade ago. Internet news is a classic disruptive technology. At its outset, it was simple, dirt cheap, and in many ways inferior to established journalism. But it improved over time, and once it began to rival traditional journalistic outfits in quality around the middle of this decade, the “dirt cheap” part of the equation began to dominate. When your competition can produce a roughly comparable product for a small fraction of the cost, your days are numbered.
But here’s the really important point that Christensen made that is often missed in these kinds of discussions: it’s often close to impossible for an organization built around an older technology to retool for a new, disruptive one because their cost structures just don’t allow it. The New York Times is an expensive place to run. It’s got writers, editors, typesetters, delivery trucks, an ad sales force, a big building, travel budgets, and so forth. In order to recoup those costs, they have to make a certain amount of revenue per unit of output. The institutional structure of the New York Times makes it almost impossible for it to produce news the way TPM Muckraker or Ars Technica do. The need to make payroll and cover their rent makes it almost mandatory for them to focus on their traditional core competencies because even as those markets shrink they still offer better margins than the emerging businesses.
Matt’s suggestion of launching NYTList a decade ago illustrates the point well. It’s true that in the long run this probably would have made the Times more money. But in the short run it would have been a truly wrenching transition. At a time when other papers were enjoying fat margins from their classified business, the Times would have found more and more of its classified customers switching to the new version. It would have had to start laying off classified staff and trimming other parts of the budget to cover the lost revenue. And it would have been a huge gamble: it was far from obvious in 2000 that Craigslist would become as big as it has. So yes, theoretically an enlightened NYT manager could have foreseen the growth of Craigslist and countered it. But in practice doing so would have required superhuman foresight and determination, and an extremely deferential board of directors.
Christensen’s conclusion is that the only way to avoid this grim fate is to spin off an independent subsidiary that can pursue new markets without worrying about fat profit margins or cannibalization of existing product lines. GM’s spin-off of Saturn in the 1980s is a good example of this model. Even so, it is an extremely difficult thing to pull off. It takes a CEO with the foresight to see what’s coming and the political capital within the firm to shield the spin-off from the parent company’s politics. I’m not aware of any high-profile newspaper firms that attempted this, but I’m not sure we can really blame the newspaper managers. Christensen was able to find only a handful of firms—in any industry—that managed it successfully, and the CEOs who did almost all said that it was one of the most difficult things they did as managers.
Companies are not big people. They change much more slowly than individuals do. And anyone suggesting that a firm should do things in a new way—even the guy at the top—is going to face strong pressure from traditionalists who want to keep doing things the old way. And in the short run, the traditionalists are almost always right: the old way of doing things is almost always more profitable in the short run. So although I think those who predicted the newspaper industry’s decline are entitled to a certain amount of smugness, I don’t think it’s fair to excoriate the managers who failed to move more decisively to address the problem. With the benefit of 20/20 hindsight, it’s easy to come up with scenarios that would have turned out better. But from an ex ante perspective, these trends were far from clear, and the people making the decisions were under tremendous pressure to maintain the status quo.
Mark Cuban probably didn’t know how much he’d rev up the hypocrisy meter when he suggested that the government should report its own spending and other financial information in XBRL. The SEC recently announced that it would require public companies to do their financial reporting in the format.
Having the government do it too is a GREAT idea.
And it will take years for that to happen.
Why? Because releasing information in a usable form is like releasing power. Agencies and bureaucrats aren’t in the business of giving away power.
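To see what “a usable form” buys, consider what tagged data makes possible. Below is a minimal Python sketch of the idea; the namespace, element names, and dollar figures are invented for illustration and are simplified far beyond what a real XBRL taxonomy or government filing would look like.

```python
# Minimal sketch: why machine-readable, XBRL-style filings matter.
# The document below is a hypothetical, simplified XBRL-like instance;
# the namespace, element names, and dollar figures are invented for
# illustration and are not part of any real taxonomy or filing.
import xml.etree.ElementTree as ET

SAMPLE_FILING = """<?xml version="1.0"?>
<xbrl xmlns:gov="http://example.gov/spending">
  <gov:DefenseOutlays contextRef="FY2008" unitRef="USD">481000000000</gov:DefenseOutlays>
  <gov:EducationOutlays contextRef="FY2008" unitRef="USD">68000000000</gov:EducationOutlays>
</xbrl>"""

root = ET.fromstring(SAMPLE_FILING)

# Because every figure is a tagged element rather than ink on a page,
# totaling the outlays is a one-liner instead of a data-entry project.
total = sum(int(fact.text) for fact in root)
print(f"Total tagged outlays: ${total:,}")
```

The same tagging would let a watchdog group compare one year’s filing against the next automatically, which is exactly the kind of scrutiny an agency reluctant to give away power might prefer to avoid.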
I won’t make predictions, because the idea is so good that it may build up a head of steam, unify the transparency community, and get high-level attention in the administration. But barring that, it will be a cold day (today happens to be a cold day) when the government adopts XBRL. Until then, the hypocrisy meter is rising.
Mike Palage, the first Adjunct Fellow at PFF’s Center for Internet Freedom, just published the following piece on the PFF blog.
ICANN‘s plan to begin accepting applications for new generic top-level domains (gTLDs) in mid-2009 may have been derailed by last week’s outpouring of opposition from the global business community and the United States Government (USG). Having been involved with ICANN for over a decade and having served on its Board for three years, I’ve never seen such strong and broad opposition to one of ICANN’s proposals.
This past June, the ICANN Board directed its staff to draft implementation guidelines based upon the policy recommendations of the Generic Names Supporting Organization (GNSO) that ICANN should allow more gTLDs, such as .cars, to supplement existing gTLDs such as .com. In late October, the ICANN staff released a draft Applicant Guidebook detailing its proposal. The initial public forum on this proposal closed on December 15, with over 200 comments filed online.
In its December 18 comments, the USG questioned whether ICANN had adequately addressed the “threshold question of whether the consumer benefits outweigh the potential costs.” This stinging rebuke from the Commerce Department merely confirms the consensus among the 200+ commenters on ICANN’s proposal: ICANN needs to do more than rethink its aggressive timeline for implementing its gTLD proposal or tweak the mechanics of the proposal around the edges. Instead, ICANN needs to go back to the drawing board and propose a process that results in a responsible expansion of the name space, not merely a duplication of it.
Regular readers will recall my great interest in video games and the public policy debates surrounding efforts to regulate “violent” games in particular. One thing I bring up in almost every essay I write on this subject is that fears about kids and video games are almost always overblown and that kids can typically separate fantasy from reality. Nonetheless, kids have active imaginations, and adults sometimes fear that which they cannot understand or appreciate. Friendly mentoring and open-minded parenting can go a long way toward encouraging kids to make smart choices and understand where to draw lines, whereas efforts to demonize video games and youth culture almost always backfire.
Anyway, what got me thinking about all this again was an entertaining column in today’s Washington Post by Ron Stanley (“Who Needs a TV to Play Video Games”), which describes the author’s experiences with his nephew as they played out video game-like scenarios using traditional toys and household items. It’s a wonderful piece worth reading in its entirety, but here’s the key takeaway that I’d like to discuss:
There was no evidence that television and video games had stifled the kids’ creativity. Nor was there any evidence that technology had made them smarter than earlier generations. They simply had a different frame of reference, one that included video games and computers as well as ponies, pet stores and sword fights. Children play with the tools at hand, and they’re great at thinking metaphorically — at imagining that a landspeeder is a sentient robot or that a stick is a gun or that salt-and-pepper shakers are a bride and groom or that a card table is a horse’s stable.
They’re also geniuses at figuring out simple mechanics. My 6-year-old nephew had to explain to me that miniature low-rider cars don’t roll very well on carpet and will flip over more than if racing on hardwood floors. Novice that I was, I was choosing cars that looked the coolest. And they are geniuses at intuiting rules and systems, and at re-creating these rules and systems in their own play. Children who play lots of card games will invent their own card games. Children who play lots of board games will invent their own board games. And children who play lots of video games will invent their own video-game-like games when they don’t have access to the game controllers.
“Damn their lies and trust your eyes. Dig every kind of fox!” I here sing one for the freedom to mix it up as you and your honey alone see fit:
“Hapa” means “mixed race” in Hawaiian. Skin-tone mash-ups have profoundly enriched my life, first with the Honolulu Hapa herself and then with our own little hapas. Honolulu Hapa celebrates coloring across the lines, knocks racism, and gives a shout-out to Loving v. Virginia, 388 U.S. 1 (1967)—the case in which the U.S. Supreme Court struck down anti-miscegenation laws as unconstitutional restraints on personal liberty.
As with the prior four songs I’ve posted in this recent series (Take Up the Flame, Sensible Khakis, Nice to Be Wanted, and Hello, Jonah), Honolulu Hapa comes with a Creative Commons license that allows pretty liberal use by all but commercial licensees, who have to pay a tithe to one of my favorite causes. Honolulu Hapa aims to help Creative Commons, an organization that helps all of us to mix—and remix—it up. Unlike those other songs, however, Honolulu Hapa adds a special “unrestricted use” term effective on June 12, Loving Day.
With Honolulu Hapa, I conclude my recent series of freedom-loving music videos. Like it or not, though, I’ve got more music-making plans. Next, I’ll record some good studio versions of those (and perhaps some other) songs. Eventually, I’d like to release a fundraising CD, one that might help out some good causes. Silly? Yeah, I guess so. But it does add another data point in support of my hypothesis: Freedom has more fun.