Over the past year or so, many market-oriented critics of Google, like Scott Cleland and Richard Bennett, have taken the company to task for aligning itself with Left-leaning causes and intellectuals. Lately, however, what I find interesting is how many leading leftist intellectuals and organizations have begun turning on the company, becoming far more critical of America’s greatest capitalist success story of the past decade. The reason this concerns me is that I see an unholy Right-Left alliance slowly forming that could lead to more calls for regulation not just of Google, but of the entire search marketplace. In other words, “Googlephobia” could bubble over into something truly ugly.
Consider the comments of Tim Wu and Lawrence Lessig in Jeff Rosen’s huge New York Times Magazine article this weekend, “Google’s Gatekeepers.” Along with Yochai Benkler, Lessig and Wu form the Holy Trinity of the Digital Left; they set the intellectual agenda for the Left on information technology policy issues. In his piece, Rosen quotes both Wu and Lessig going negative on Google. Wu tells Rosen that “To love Google, you have to be a little bit of a monarchist, you have to have faith in the way people traditionally felt about the king.” Moreover:
Continue reading →
My good friend Jim Dunstan will be speaking to the “Games Gateway” meet-up group for the U.S. Mid-Atlantic Region on Dec. 2 at 6:30 pm about the legal issues affecting video game developers.
Did you know that enabling gamers to talk via voice while in a virtual world may subject you to FCC regulations? Or that the Children’s Online Privacy Protection Act, enforced by the FTC, must be followed by game sites that knowingly collect information from children under the age of 13? Whether you are a developer of console, PC, or online games and worlds, there are legal issues that you need to keep in mind. Many of them are surprising, so join us to hear James Dunstan, partner at Garvey Schubert and Barer, an expert in video game and telecommunications law, discuss the ins and outs of interesting legal issues, what you as a game developer need to keep in mind, and steps to take as you develop your next game.
Besides being a space/Internet/communications lawyer (my alter ego!), Jim’s a video game programmer himself and has spent years advising video game clients. RSVP here.
Last week I discussed Barbara Esbin’s new PFF paper about the FCC’s absurd investigation into how the cable industry is transitioning analog customers over to digital. This transition is essential if the cable industry is going to free up bandwidth to compete against telco-provided fiber offerings in the future. The faster the cable industry can migrate its old analog TV customers over to the digital platform, the more bandwidth it can re-deploy for high-speed Net access and services. Mark Cuban helps put things in perspective:
1. The only thing that cable companies, and satellite for that matter, have to sell is bandwidth and the applications they can run on that bandwidth. More bandwidth means more digital everything.
2. Basic cable subscribers who get, say, 40 analog channels are consuming 40 x 38.6 Mbps, or 1.54 Gbps. Let that sink in. 1.54 Gbps of bandwidth. Compare that to how fast your Internet access is. That’s more bandwidth than your entire neighborhood consumes online, by a lot.
That’s also the equivalent of 500 standard-def digital channels. If you convert that to revenue per bit for cable companies, or cost per bit for basic cable consumers, the basic cable customers are getting the best deal in town. By a long shot.
Digital cable customers, not so much. Digital customers are paying multiples of what analog customers pay for bandwidth. In reality, analog customers are getting an amazing deal, and the cable companies have been hesitant to convert them only because of the potential FCC backlash.
I’m as cynical as the next guy when it comes to cable rates and motivations, but the reality is that the longer analog remains, the fewer opportunities there are to leverage the freed-up bandwidth to create next-generation bandwidth-hog applications. Will the cable companies charge us a lot for that bandwidth? Probably. But when we start to see applications built on top of 250 Mbps and more, it will have far more value to society than watching USA Network on your old analog TV. And net neutrality? Well, if everyone had that 1.54 Gbps available to them, net neutrality would be a non-issue. We wouldn’t be arguing about access or pre-emption; we’d be arguing about quality of service.
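Cuban’s arithmetic is easy to sanity-check. Here is a minimal back-of-the-envelope sketch in Python; the 38.6 Mbps of digital capacity per analog channel slot is Cuban’s own figure, while the roughly 3.1 Mbps assumed for a standard-def digital channel is our assumption (chosen to be consistent with his 500-channel equivalence), not a number from his post:

```python
# Back-of-the-envelope check on Cuban's bandwidth arithmetic.
# 38.6 Mbps per channel slot is Cuban's figure; the ~3.1 Mbps
# per standard-def digital channel is an assumed bitrate.

ANALOG_CHANNELS = 40
MBPS_PER_SLOT = 38.6        # digital capacity of one analog channel slot
MBPS_PER_SD_CHANNEL = 3.1   # assumed bitrate of one SD digital channel

total_mbps = ANALOG_CHANNELS * MBPS_PER_SLOT
total_gbps = total_mbps / 1000.0
sd_equivalents = total_mbps / MBPS_PER_SD_CHANNEL

print(f"40 analog channels tie up {total_mbps:.0f} Mbps (~{total_gbps:.2f} Gbps)")
print(f"The same spectrum could carry ~{sd_equivalents:.0f} SD digital channels")
```

Run as written, this reproduces both of Cuban’s claims: about 1.54 Gbps tied up by 40 analog channels, or roughly 500 standard-def digital channels’ worth of capacity.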
Once again we are reminded that all regulations have opportunity costs, and in this case the FCC’s actions could cost consumers higher-speed broadband offerings in the near term, or at least delay them.
Incidentally, the bureaucratic dynamic I wrote about in my last post also explains why efforts to “reinvent government” to reduce “waste, fraud, and abuse” never work very well. The problem isn’t that there’s no waste, fraud or abuse, or even that these policies don’t succeed in rooting some of it out. Rather, the problem is that policies designed to make government more efficient and accountable accumulate over time. So the new policies the Obama administration implements to deal with mismanagement that occurred under the Bush administration will largely co-exist with policies implemented under the Clinton, Reagan, Carter, Johnson, Kennedy, and Eisenhower administrations to deal with problems observed under their predecessors. There was probably a good reason for each set of rules by itself, but when you add them up, the result is a kind of death of a thousand cuts where federal bureaucrats can’t get anything done because doing anything requires a huge amount of paperwork. And then of course we have “paperwork reduction acts” where we hire a whole new set of bureaucrats to promulgate still more rules ostensibly designed to make the earlier rules less complicated. Not surprisingly, it doesn’t work especially well.
Another great essay from Paul Graham:
Checks on purchases [at large companies] will always be expensive, because the harder it is to sell something to you, the more it has to cost. And not merely linearly, either. If you’re hard enough to sell to, the people who are best at making things don’t want to bother. The only people who will sell to you are companies that specialize in selling to you. Then you’ve sunk to a whole new level of inefficiency. Market mechanisms no longer protect you, because the good suppliers are no longer in the market.
Such things happen constantly to the biggest organizations of all, governments. But checks instituted by governments can cause much worse problems than merely overpaying. Checks instituted by governments can cripple a country’s whole economy. Up till about 1400, China was richer and more technologically advanced than Europe. One reason Europe pulled ahead was that the Chinese government restricted long trading voyages. So it was left to the Europeans to explore and eventually to dominate the rest of the world, including China.
In more recent times, Sarbanes-Oxley has practically destroyed the US IPO market. That wasn’t the intention of the legislators who wrote it. They just wanted to add a few more checks on public companies. But they forgot to consider the cost. They forgot that companies about to go public are usually rather stretched, and that the weight of a few extra checks that might be easy for General Electric to bear are enough to prevent younger companies from being public at all.
One of the most challenging things about this kind of institutional bloat is that it’s extremely hard to articulate to those in positions of authority precisely how damaging this kind of institutional overhead can be. I’ve been involved in at least one organization (which shall remain nameless) that had created elaborate processes for reviewing and double-checking moderately expensive purchases. And the phenomenon Graham described applied with a vengeance in those cases. The minor cost was the dozens of hours devoted to making the case to the relevant decision-makers for our preferred option. The more important, but harder to articulate, cost was the way the approval requirements distorted the decision-making process. Since we’d have to defend our choices to decision-makers who didn’t know very much about the options, we tended to overweight factors that could be clearly and easily explained (“this supplier is the market share leader,” “this candidate has a PhD”) and underweight more important but harder-to-articulate qualities of the various options. And because we had to gather a lot of mostly useless information about the options to present to the decision-makers, the most qualified suppliers would often opt out of dealing with us, because they could get business with a lot less hassle elsewhere.
And this organization was not especially large, in the grand scheme of things. These problems tend to get worse as an institution gets larger and older. I was just talking to a friend who works with a firm that provides services to large banks, and she was complaining about how much money big companies waste on this kind of overhead. And the condition is almost terminal in one of America’s largest and oldest institutions—the federal government. The reason we have $600 toilet seats isn’t that the people buying them are corrupt or incompetent. It’s that a $600 toilet seat costs $50 to manufacture and $550 in paperwork to sell to the federal government.
Via Jeff Jonas, who oh-so-carefully assessed the treatment he received in Stephen Baker’s book The Numerati, I came across this NPR interview with Baker.
In the latter part of the interview, Baker gives a pretty accurate account of Jonas’ dissent from the passion for predictive data mining in the national security world. That dissent was given expression in the paper Jeff and I wrote, “Effective Counterterrorism and the Limited Role of Predictive Data Mining.”
The data intelligentsia are an interesting subject for a book, of course – it looks like The Numerati may have a lot of similarities to Robert O’Harrow’s No Place to Hide – and the NPR interview is interesting. But what makes it notable is Baker’s economic literacy. Or, more accurately, his lack of economic literacy.
Now, I’m not an economist either, so I’ll stand for correction in the comments (actual economists preferred, not just people with strong opinions, please).
Continue reading →
It’s sad that it even needs to be said, but Mike Masnick reminds us that if you’re writing about “Digital Socialism and the Tyranny of the Consumer,” then you’re deeply, deeply confused. The “tyranny of the consumer” is the distinctive feature of free-market economies. And if we were going to label someone in the copyright debate “socialist,” it would be those who advocate government-granted monopolies in the reproduction of creative works, not those who want to repeal them. The author of this piece seems not to grasp the distinction between collectively-owned resources and unowned resources. Here’s a handy cheat sheet:
| Collectively Owned | Unowned |
|---|---|
| The US Post Office | Air |
| The British National Health Service | Sunlight |
| American public schools | The Bible |
| AIG | Tom Sawyer |
| Amtrak | War and Peace |
| The Cuban economy | The TCP/IP protocols |
If you’re a supporter of the kinds of institutions we find in the first column, it might be reasonable to call you a “socialist.” If you support those in the second column, not so much.
Over on the Cato@Liberty blog, I’ve done a fairly lengthy write-up of the Google Flu Trends privacy issue. It’s an important problem that I think deserves a little more than dismissal.
My conclusion: “The heart of the problem lies not with the current leader in search, or any other Internet innovator. The problem lies with our unconstrained government.”
If you’re inclined to dismiss this conclusion as libertarian boilerplate, please read the post.
Remember being in grade school when a classmate’s rabble-rousing would ruin it for everybody, and the teacher would hold back the class from going to recess? The other students would moan and groan and justifiably feel that punishing the entire class for one person’s misdeeds was unfair. This is what I fear for the tech industry in the wake of the financial sector’s trouble-making.
If politicians turn their gaze from the financial sector to tech in 2009, they will likely focus on the issue of personally identifiable information (PII) and privacy. And here’s why.
Now is a precarious time for all markets. Our politicians are looking to reassure Americans by regulating areas of the economy where there’s minimal regulation and a perceived lack of transparency. Rightly or wrongly, the collection and use of a user’s online data is seen as one of those areas. And the following overarching trends and recent events could combine to further a pro-regulatory privacy agenda in 2009:
- Criminal Abuse: The Federal Trade Commission estimates that as many as 9 million Americans have their identities stolen each year. Increasingly, identity thieves use online phishing scams to deceive consumers. Phishers and other online criminals thrive because more and more of us are in the pond—we store and access our personal and sensitive data online.
- Private Abuse: Some ISPs’ failed experiment with NebuAd’s deep packet inspection behavioral tracking, and the lack of transparency that surrounded it.
- Private Use: The collection of user information is at the center of the web-delivered content and social media that help define the Web 2.0 economy. According to the Interactive Advertising Bureau, Internet advertising revenues for the first six months of 2008 were $11.5 billion, a 15.2 percent increase over the first half of 2007. Search histories and stored cookie profiles help create a user dossier that ad companies use to display targeted ads, and to help generate the more than $120 billion in online retail sales expected this year. (A simplified sketch of this cookie-to-ad mechanism appears after this list.)
- Market Power: The DOJ’s investigation of the Google-Yahoo deal, premised on Google’s market power, puts a Big Brother stigma on all online data collection practices.
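To make the mechanics behind the “Private Use” item concrete, here is a deliberately oversimplified sketch of how a cookie-keyed behavioral profile might drive ad selection. Every cookie ID, category, and ad below is a hypothetical illustration; no real ad network exposes an interface this simple.

```python
# Oversimplified sketch of cookie-based behavioral ad targeting.
# All names, categories, and ads are hypothetical illustrations.

# A "dossier": cookie ID -> interest categories inferred from
# search history and pages visited.
profiles = {
    "cookie-abc123": {"travel", "golf"},
    "cookie-def456": {"video games", "electronics"},
}

# Ad inventory, keyed by the interest category each ad targets.
ads_by_category = {
    "travel": "Cheap flights to Hawaii!",
    "golf": "New drivers, 20% off",
    "video games": "Pre-order the latest console hit",
}

def pick_ad(cookie_id: str) -> str:
    """Return a targeted ad if a profile exists, else a generic one."""
    for category in profiles.get(cookie_id, set()):
        if category in ads_by_category:
            return ads_by_category[category]
    return "Generic banner ad"

print(pick_ad("cookie-abc123"))   # targeted: a travel or golf ad
print(pick_ad("cookie-unknown"))  # no dossier, falls back to generic
```

The point of the sketch is simply that the dossier is the valuable asset: the richer the profile, the more an ad impression is worth, which is exactly why data collection sits at the center of the Web 2.0 economy.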
Continue reading →