Technology, Business & Cool Toys

Come one, come all. ACT will be hosting a lunch event next Tuesday (June 23) at noon on privacy, free software, and government procurement.

We’ll discuss “free” software (i.e., no license fees, free as in beer). It’s a nuanced take on some of what Chris Anderson will surely be talking about in his upcoming book on Free—where does the $ come from in software that we all use for free on the web, or that we download to our computers?

To answer this question, we’ll attempt to update traditional Total Cost of Ownership (TCO) analysis for ad-based software and services. We’ll also take up the privacy, security, and sustainability concerns that surround cloud-based solutions. In addition, the event will deal with skeptics who think that “free” means no business model at all. We’ll describe how free software and services are usually just one aspect of a larger enterprise geared toward expanding market penetration and increasing revenues, a point Mike Masnick made in a recent Techdirt post.
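To make the TCO exercise concrete, here is a minimal, purely illustrative sketch in Python. Every per-seat figure, and the idea of pricing “privacy risk” as a cost line, is a hypothetical assumption rather than anything from our forthcoming paper; the point is simply that “no license fee” is only one entry in the ledger.

```python
# Toy Total Cost of Ownership (TCO) comparison: licensed desktop software
# versus a "free," ad-supported web service. All numbers are hypothetical.

def tco(license_fee, support, training, migration, privacy_risk, years):
    """Sum one-time costs plus recurring annual costs over the ownership period."""
    one_time = license_fee + migration
    recurring_per_year = support + training + privacy_risk
    return one_time + recurring_per_year * years

# Assumed per-seat figures (illustrative only):
licensed = tco(license_fee=300, support=50, training=40, migration=0,
               privacy_risk=0, years=3)
ad_supported = tco(license_fee=0, support=30, training=40, migration=20,
                   privacy_risk=25, years=3)  # privacy_risk: a rough price on data exposure

print(f"Licensed software, 3-year TCO per seat:    ${licensed}")
print(f"Ad-supported service, 3-year TCO per seat: ${ad_supported}")
```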

I’m going to moderate, and our speakers will be Rob Atkinson of ITIF, Tom Schatz of CAGW, and Peter Corbett of iStrategyLabs.

We’ll be releasing a paper on all this, so come join us for lunch and a lively discussion–and best of all, it’s FREE!!

Further details are here.

Via Kevin Kelly I see that at some point Forbes magazine produced this chart measuring technology diffusion rates for various media and communications technologies since their year of inception.
Forbes tech diffusion chart
I found this of great interest because I have been putting together charts and tables illustrating technological diffusion since the mid-90s [most recently in my “Media Metrics” report]. This particular chart is tricky to build because you are forced to pick a “Year 1” date to begin each of the “S curves.” For example, what is “Year 1” for electricity or telephony on one hand, or for the PC or the Internet on the other? That’s not always easy to determine, since it is unclear when certain technologies were “born.”

Regardless, no matter how you cut it, the more modern and less regulated the technology, the quicker it gets to market. Here are a couple of my recent charts illustrating that fact. The first shows how long it took various technologies to reach 50% household penetration. The second illustrates the extent of household diffusion over time.
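For readers curious about the mechanics behind such charts, here is a hedged Python sketch of the underlying S-curve logic. The logistic parameters below are invented stand-ins, not data from the Forbes chart or from my reports, and the exercise makes the “Year 1” problem plain: shift the assumed starting year and every time-to-50% estimate shifts with it.

```python
# Sketch of the "S curve" logic behind technology diffusion charts:
# logistic household adoption and the implied years to 50% penetration,
# measured from an assumed "Year 1." All parameters are made up.

import math

def penetration(t, ceiling, midpoint, rate):
    """Share of households that have adopted by year t (logistic curve)."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

def years_to_half(ceiling, midpoint, rate):
    """Solve penetration(t) = 0.5 for t; undefined if adoption never passes 50%."""
    if ceiling <= 0.5:
        return None
    return midpoint - math.log(ceiling / 0.5 - 1.0) / rate

# Purely hypothetical technology profiles (ceiling, midpoint year, growth rate):
profiles = [
    ("early-era, heavily regulated technology", 0.95, 45.0, 0.10),
    ("mid-century broadcast technology",        0.98, 10.0, 0.45),
    ("modern, lightly regulated technology",    0.80,  8.0, 0.60),
]
for name, ceiling, midpoint, rate in profiles:
    t50 = years_to_half(ceiling, midpoint, rate)
    print(f"{name}: roughly {t50:.0f} years to reach 50% of households")
```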


However, as Kevin Kelly notes, we almost never see any technology hit 100% household penetration (although the boob tube got close!).

NebuAd is Dead


NebuAd is dead. The company‘s plan to track users through their ISPs for the purpose of targeting advertising met with public and congressional concern that ultimately led to its demise.

I believe that ISPs should stick to serving bits and not get into the business of serving or helping to serve ads, so I’m glad to see NebuAd’s model fail. A similar company, Phorm, has made me aware of the privacy safeguards it designs into its system, but my answer is still “No, thanks.”

As a policy story, this one is mixed. Fans of government involvement probably believe that concerns expressed by public authorities caused NebuAd’s partners to pull out. But ISPs also responded to public concerns expressed directly and in the media, and I continue to believe that consumers’ passive reliance on government authorities for protection is a mistake.

Ted Dziuba has penned a humorous and sharp-tongued piece for The Register about last week’s Adblock vs. NoScript fiasco. For those of you who aren’t Firefox junkies, a nasty public spat broke out between the makers of these two very popular Firefox browser extensions (they are the #1 and #3 most popular downloads, respectively). To make a long and complicated story much shorter: NoScript didn’t like Adblock placing its sites on a blacklist, so its developer fought back by tinkering with NoScript’s code to evade the block. Adblock responded by tinkering with its own code to circumvent the circumvention! And then, as they say, words were exchanged.

Thus, a war of words and code took place. In the end, however, things turned out (generally) happily, with NoScript backing down and apologizing. Regardless, Mr. Dziuba doesn’t like the way things played out:

The real cause of this dispute is something I like to call Nerd Law.  Nerd Law is some policy that can only be enforced by a piece of code, a public standard, or terms of service. For example, under no circumstances will a police officer throw you to the ground and introduce you to his friend the Tazer if you crawl a website and disrespect the robots.txt file.

The only way to adjudicate Nerd Law is to write about a transgression on your blog and hope that it gets to the front page of Digg. Nerd Law is the result of the pathological introversion software engineers carry around with them, being too afraid of confrontation after that one time in high school when you stood up to a jock and ended up getting your ass kicked.

Dziuba goes on to suggest that “If you actually talk to people, network, and make agreements, you’ll find that most are reasonable” and, therefore, this confrontation and resulting public fight could have been avoided. They “could have come to a mutually-agreeable solution,” he says.

But no. Sadly, software engineers will do what they were raised to do. And while it may be a really big hullabaloo to a very small subset of people who Twitter and blog their every thought as if anybody cared, to the rest of us, it just reaffirms our knowledge that it’s easy to exploit your average introvert.  After all, what’s he gonna do? Blog about it?

OK, so maybe the developers could have come to some sort of agreement if they had opened direct channels of communication or, better yet, if someone at the Mozilla Foundation had intervened early on and mediated the dispute. At the end of the day, however, that did not happen and a public “Nerd War” ensued. But I’d like to say a word in defense of Nerd Law and public fights about “a piece of code, a public standard, or terms of service.”
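To see just how toothless Nerd Law can be, take Dziuba’s robots.txt example. The entire convention amounts to crawlers voluntarily running a check like the short Python sketch below, using the standard library’s robotparser; the bot name and site here are placeholders, and nothing but the crawler’s own code (and the fear of an angry blog post) enforces the rule.

```python
# Honoring robots.txt -- a "Nerd Law" rule that only the crawler's own code enforces.
# The user agent and URLs below are placeholders, not a real bot or site policy.

from urllib import robotparser

ROBOTS_URL = "https://example.com/robots.txt"  # hypothetical site
USER_AGENT = "ExampleCrawler"                  # hypothetical bot name

rp = robotparser.RobotFileParser()
rp.set_url(ROBOTS_URL)
rp.read()  # fetch and parse the site's robots.txt

for url in ["https://example.com/", "https://example.com/private/page.html"]:
    if rp.can_fetch(USER_AGENT, url):
        print(f"OK to crawl: {url}")
    else:
        # No police officer shows up if you fetch it anyway; only Nerd Law applies.
        print(f"Disallowed by robots.txt, so a polite crawler skips it: {url}")
```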


Over at the Verizon Policy Blog, Link Hoewing has a sharp piece up entitled, “Of Business Models and Innovation.” He makes a point that I have often stressed in my debates with Zittrain and Lessig, namely, that the whole “open vs. closed” debate is typically greatly overstated or misunderstood.   Hoewing correctly argues that:

The point is not that open or managed models are always better or worse.  The point is that there is no one “right” model for promoting innovation.  There are examples of managed and open business models that have been both good for innovation and bad for it. There are also examples of managed and open models that have both succeeded and failed.  The point is in a competitive market to let companies develop business models they believe will serve consumers best and see how things play out.

Exactly right.  Moreover, the really important point here is that there exists a diverse spectrum of innovative digital alternatives from which to choose. Along the “open vs. closed” spectrum, the range of digital technologies and business models continues to grow and grow in both directions.  Do you want wide-open, tinker-friendly devices, sites, or software? You got it. Do you want a more closed, simple, and safe online experience?  You can have that, too.  And there are plenty of choices in between.

This is called progress!

Here’s a terrific piece by Harry McCracken over at Technologizer asking “Whatever Happened to the Top 15 Web Properties of April, 1999?”  McCracken goes through the hottest web properties of April 1999 and asks, “How many of 1999’s Web giants remain gigantic today — assuming they still exist at all?”  Instead of reproducing his entire list here, I’ll just encourage you to go over to Technologizer and check it out for yourself, especially because McCracken also compares the old list to today’s top 15 Web properties.  Anyway, here’s the key takeaway from his piece:

to summarize, four of April 1999’s top Web properties remain in the top fifteen (plus AltaVista, Excite, and GeoCities, which are extant and part of top-10 properties). Four more are in the top 50, or are part of properties that are. Two exist but have fallen out of the top 50. And two (Xoom and Snap) no longer exist. Bottom line: If you were one of the Web’s biggest properties a decade ago, chances are high that you remain in business in some form in 2009… but you probably aren’t still a giant.

In other words, it’s a dynamic marketplace with a lot of churn and creative destruction. Sure, some big dogs from the late 90s remain (Microsoft, AOL, Yahoo, and CNet), but they have all been humbled to some extent. Meanwhile, lots of other players were driven from the top ranks or disappeared altogether (GeoCities, Lycos, Excite, AltaVista, Xoom, Snap). And new technologies, platforms, and players have come out of nowhere in a very short time to become the household names of 2009 (Google, Facebook, MySpace, Wikipedia). But, as McCracken points out, it’s anyone’s guess which of today’s top Web properties will still be booming in 2019. Anyway, I encourage you to check out McCracken’s very interesting essay, and if you find this sort of retrospective piece interesting, you might also want to check out my essay from earlier this year, “10 Years Ago Today… Thinking About Technological Progress.”

The Library of Congress now has a YouTube channel. Among the gems you can find there is one of the earliest motion pictures ever made: a man named Fred Ott, sneezing.

On the problems with the newspaper industry, Michael Kinsley writes in the Washington Post:

You may love the morning ritual of the paper and coffee, as I do, but do you seriously think that this deserves a subsidy? Sorry, but people who have grown up around computers find reading the news on paper just as annoying as you find reading it on a screen. (All that ink on your hands and clothes.) If your concern is grander – that if we don’t save traditional newspapers we will lose information vital to democracy – you are saying that people should get this information whether or not they want it. That’s an unattractive argument: shoving information down people’s throats in the name of democracy.

I rarely say it, but the whole thing is worth reading.

Yochai Benkler ponders the death of the newspaper:

Critics of online media raise concerns about the ease with which gossip and unsubstantiated claims can be propagated on the Net. However, on the Net we have all learned to read with a grain of salt between our teeth, like Russians drinking tea through a sugar cube. The traditional media, to the contrary, commanded respect and imposed authority. It was precisely this respect and authority that made The New York Times’ reporting on weapons of mass destruction in Iraq so instrumental in legitimating the lies that the Bush administration used to lead this country to war.

This is a fantastic insight, and indeed, it’s precisely the insight that we libertarians apply to the regulatory state. That is, just as decentralized media and a skeptical public are better than the cathedral style of news gathering, so too are decentralized certification schemes and a skeptical public better than a single, cathedral-style regulatory agency “guaranteeing” that businesses are serving consumers well. Most of the time the regulators will protect the public, just as most of the time newspapers get their stories right. The problem is that no institution is perfect, and the consequences of failure are much more serious if the population has gotten used to blindly trusting the authority figure rather than exercising healthy skepticism. Regulatory agencies are single points of failure, and in a complex economy single points of failure are a recipe for disaster.

Will Wilkinson makes the related point that journalists are prone to journalistic capture that’s very much akin to the regulatory capture that plagues government agencies.

Worried that decentralized news-gathering sources won’t be able to do the job the monolithic newspapers are leaving behind? Jesse Walker has a great piece cataloging the many ways that stories can get from obscure city council meetings to popular attention.

Theories constitute the technology of academia. They give us eggheads the tools we need to get our work done, just as computers serve programmers and DNA sequencing serves bioengineers. I trust that TLF’s readers won’t think me too far off-topic, then, if I cite a new approach to consent theory, something that should interest anyone who cares about the fundamental reasons for valuing liberty. Here’s a snapshot of the theory:

Figure 3:  The Relationship Between Consent and Justification

To get the full story, please see that figure’s source: Graduated Consent Theory, Explained and Applied, Chapman University School of Law, Legal Studies Research Paper Series, Paper No. 09-13 (March 2009) [PDF]. The paper reviews the importance of consent in legal, moral, and economic reasoning, and develops a model of the relationship between consent and justification. It concludes by applying that model to a number of practical problems. Most notably, in contrast to both originalism and “living constitutionalism,” the paper promotes interpreting the Constitution according to the plain, present, public meaning of its text and resolving ambiguities in favor of individual liberty.