What We’re Reading

[Note: I updated this discussion and chart in a subsequent essay. See: “Are You An Internet Optimist or Pessimist? The Great Debate over Technology’s Impact on Society.”]

A number of very interesting books have been released over the past year or two that debate how the Internet is reshaping our culture and the economy. I’ve reviewed a couple of them here, but I have been waiting to compile a sort of mega-book review once I found a sensible way to group them together conceptually. I’m not going to have time to cover each of them here in the detail they deserve, but I think I have at least found a sensible way to categorize them. For lack of better descriptors, I’ve divided these books and thinkers into two camps: “Internet optimists” versus “Internet pessimists.” Here’s a list of some of the individuals and books (or other articles and blogs) that I believe epitomize these two camps of thinking:

Adherents & Their Books / Writings

| Internet Optimists | Internet Pessimists |
| --- | --- |
| Yochai Benkler, The Wealth of Networks | Andrew Keen, The Cult of the Amateur |
| Chris Anderson, The Long Tail and “Free!” | Lee Siegel, Against the Machine |
| Clay Shirky, Here Comes Everybody | Nick Carr, The Big Switch |
| Cass Sunstein, Infotopia | Cass Sunstein, Republic.com |
| Don Tapscott, Wikinomics | Todd Gitlin, Media Unlimited |
| Kevin Kelly & Wired mag in general | Alex Iskold, “The Danger of Free” |
| Mike Masnick & TechDirt blog | Mark Cuban |

And here’s a rough sketch of the major beliefs or key themes that separate these two schools of thinking about the impact of the Internet on our culture and economy:

| Beliefs / Themes | Internet Optimists | Internet Pessimists |
| --- | --- | --- |
| Culture / Social | Net is Participatory | Net is Polarizing |
| | Net yields Personalization | Net yields Fragmentation |
| | A “Global village” | Balkanization |
| | Heterogeneity / Diversity of Thought | Homogeneity / Close-mindedness |
| | Net breeds pro-democratic tendencies | Net breeds anti-democratic tendencies |
| | Tool of liberation & empowerment | Tool of frequent misuse & abuse |
| Economics / Business | Benefits of “free” (“Free” = future of media / business) | Costs of “free” (“Free” = end of media / business) |
| | Increasing importance of “Gift economy” | Continuing importance of property rights, profits, firms |
| | “Wiki” model = wisdom of crowds; power of collective intelligence | “Wiki” model = stupidity of crowds; errors of collective intelligence |
| | Mass collaboration | Individual effort |

So, what to make of this intellectual war? Who’s got the story right?

Continue reading →

Read Recently: The Marriage of Elizabeth Barrett Browning and Robert Browning. A remarkable and very non-technological story.

Also: Most of John Dupre’s book Human Nature and the Limits of Science. This turns out to be a critique of two models of human nature, one derived from evolutionary biology and evolutionary psychology, and the other derived from economics. Dupre favors a view of human nature more closely linked to culture, acknowledging the value of diversity. This is a topic well worth writing about; unfortunately, this particular book would have benefited from a vigorous pre-publication critique. Reading it is a lot like having a very frustrating dinner conversation with Dupre, in which interesting arguments are stumbled over, explained only partly, and then abandoned. Continue reading →

Stephen Schultze is an up-and-coming technology policy analyst who is a fellow at the Berkman Center for Internet and Society at Harvard University. He is also finishing up his Master of Science in Comparative Media Studies at MIT. He’s been kind enough to stop by here at the TLF on occasion and comment on some of the things we have written, usually to give us grief, but we welcome that too! He’s very sharp and always has something of substance to say, and he says it in a respectful way. So I look forward to many years of intellectual combat with him. (Incidentally, we also share a mutual admiration for the work of Ithiel de Sola Pool, especially his 1983 classic, “Technologies of Freedom: On Free Speech in an Electronic Age,” which I have noted is my favorite tech policy book of all time.)

Anyway, Stephen has just posted his master’s thesis: “The Business of Broadband and the Public Interest: Media Policy for the Network Society.” It’s a noble attempt to defend and extend the “public interest” concept in the Digital Age; Stephen attempts to “identify the several dimensions in which it remains relevant today.” In his thesis, he cites some of my past work on the subject, since I have articulated a very different view. Specifically, he cites a line of mine that I have used in multiple studies and essays on the issue:

“The public interest standard is not really a ‘standard’ at all since it has no fixed meaning; the definition of the phrase has shifted with the political winds to suit the whims of those in power at any given time.”

I stand by that quote and down below I have pasted a lengthy passage on the mythology surrounding the public interest standard, which I pulled directly from my old 2005 “Media Myths” book. It explains in more detail why I feel that way.

“Right now is a critical point of media in transition that will affect the shape of the communications ecosystem going forward,” Stephen states in his thesis. I couldn’t agree more, but I completely disagree that it somehow justifies breathing new life into a standard-less standard that invites open-ended, arbitrary governance of the Internet and digital media. Read on to understand why I feel that way…
Continue reading →

So, the new iPhone OS was cracked in mere hours. According to the folks at Gizmodo:

The new iPhone OS 2.0 software has been unlocked and jailbroken. It was released just hours ago and it has already been cracked by the iPhone Dev Team. The first one took a couple of months, but this one was actually unlocked before Apple released it to the public. … Now that the official iPhone OS 2.0 is out, the iPhone Dev Team will release their Pwnage tool for everyone to unlock and jailbreak their iPhones soon.

Shocker, right? Well, anyway, I found this funny because back in March I gave Jonathan Zittrain a lot of grief for making the iPhone out to be some sort of enemy of the people because of its closed, proprietary nature. In his provocative new book, “The Future of the Internet and How to Stop It,” he suggested that the iPhone typified a dangerous new emerging business model that would destroy the “generative” nature of the Net by pushing people into closed systems.

My response was basically that Jonathan was making a mountain out of a molehill. Generative technologies weren’t going anywhere, and the Net certainly wasn’t “dying.” Not only is generativity thriving, but there’s just no way to stop people from hacking away at closed devices and networks, as today’s cracking of the iPhone in mere hours proves once again.

So, Jonathan, I hate to pick on you again, buddy, but what exactly is the problem? Apple has put another great device on the market and people immediately took steps to open it up and see if they could make it even better. Sounds like progress to me.

The Zittrain thesis is just getting harder and harder for me to take seriously.

From pp. 141-143 of The FBI and American Democracy:

For inexplicable reasons, [John Malone, head of the FBI’s New York Field Office in the 1960s] had not complied with the record destruction requirements of the Do Not File procedure. His failure to do so preserved a massive file (amounting to twenty-seven volumes) that documented the number and targets of break-ins conducted by New York agents, identified the agents participating, and contained the specific records of the targeted individuals or organizations that agents had photographed…

Because the Malone file confirmed that, in 1972 and 1973, New York agents had conducted break-ins during an investigation of the Weather Underground activists, a practice that fell within the five-year statute of limitations, Justice Department officials accordingly instituted a criminal inquiry that led to the indictment of John Kearney, the FBI supervisor who headed the New York break-in squad (identifiable from the Malone records). FBI agents nationwide bitterly criticized Kearney’s indictment, protesting that he had been following orders. Further investigation led to the May 1977 discovery of thirteen break-in authorization memoranda at FBI headquarters. Consequently, in April 1978, Justice Department officials dropped the Kearney indictment and indicted, instead, former Acting FBI Director L. Patrick Gray III, former FBI Associate Director W. Mark Felt [AKA “Deep Throat”], and former FBI Assistant Director Edward Miller for having authorized illegal practices. Gray subsequently succeeded in having his trial severed from that of Felt and Miller, arguing that he had been misled and had no knowledge of the Weather Underground break-ins. Conceding the weakness of their case against Gray, Justice Department officials dropped the criminal charges against him in December 1980. Felt and Miller were convicted. But President Ronald Reagan pardoned them on March 26, 1981, on the grounds that “they acted not with criminal intent, but in the belief that they had grants of authority reaching to the highest levels of government.”

Is it too much to hope that history repeats itself when President Obama takes office? Minus the pardon, preferably.

The FBI and Politics


More fun stuff from page 100 of the Theoharis book:

FBI officials were interested in the sexual indiscretions of elected members of Congress. FBI agents were specifically encouraged to report and record any such discoveries and to do so discreetly. During an interview with the so-called Pike Committee in 1975, a former FBI agent described this practice. Puzzled over why such information was being collected, the agent claimed to have consulted his boss, FBI Assistant Director Cartha DeLoach. He then recounted DeLoach’s response: “The other night we picked up a situation where the Senator was seen drunk, in a hit-and-run accident, and some good-looking broad was with him. He [DeLoach] said, ‘We got the information, reported it in a memorandum’ and DeLoach—and this is an exact quote—he said ‘by noon of the next day the good Senator was aware that we had the information and we never had any trouble with him on appropriations since.'”

Now, I have no evidence that today’s NSA or FBI is doing anything like this. But of course, someone in the 1960s wouldn’t have realized what the FBI was doing then, either. We certainly shouldn’t be passing legislation making this sort of thing easier to pull off and harder to uncover.

I’m boning up on the history of the FBI, reading Athan Theoharis’s The FBI and American Democracy. So far, I’ve gotten from the FBI’s inception (100 years ago this month) to midcentury. The most remarkable thing about it is how familiar it all seems. As Theoharis tells the story, the FBI has, from its inception, pushed for ever broader authority to spy on Americans. During the first half of the 20th century, it pushed relentlessly for broader statutory authority. When Congress would not give it the authority it wanted, it sought authorization from senior executive branch officials to break the law. If authorization wasn’t forthcoming, the bureau would often do what it wanted anyway and not tell its nominal superiors about its activities.

A few illustrative anecdotes:

  • In 1937 and 1939, the Supreme Court ruled that wiretapping was illegal under the 1934 Communications Act. President Roosevelt responded in 1940 with a “secret directive authorizing FBI wiretaps during ‘national defense’ investigations. The president privately reasoned that the Court’s rulings governed only criminal cases.” Roosevelt required the FBI to seek specific authorization from the attorney general for each wiretap, but the FBI found this requirement too onerous, and “installed wiretaps without the attorney general’s advance approval” on at least 17 occasions.
  • In 1954, the Supreme Court held that trespassing in order to install bugs violated the Fourth Amendment. The FBI asked the attorney general for authorization to ignore the ruling and continue illegally bugging people’s homes, but the attorney general sought plausible deniability, writing that he “would be in a much better position to defend the Bureau in the event that there should be a technical trespass if he had not heretofore approved it.” The FBI continued bugging, without bothering its nominal superiors with the details.
Continue reading →

MIT’s Technology Review has a great review of a new biography of Georges Doriot (Wikipedia) by Businessweek editor Spencer E. Ante, entitled Creative Capital: Georges Doriot and the Birth of Venture Capital. Born in France, Doriot fought in World War I, then studied at Harvard Business School, served as a brigadier general directing the U.S. military’s Military Planning Division during World War II, and in 1946 launched American Research and Development Corporation (ARD) as the first publicly owned venture capital firm.

Doriot’s legacy looms large today, even if his name is new to most:

Contemporaneously with ARD’s watershed investment in [Digital Equipment Corporation], others began walking the trails Doriot had blazed: Arthur Rock (a student of Doriot’s in the Harvard class of 1951) backed the departure of the “Traitorous Eight” from Shockley Semiconductor to form Fairchild Semiconductor in 1957, then funded Robert Noyce and Gordon Moore when they left Fairchild to found Intel; Laurance Rockefeller formed Venrock, which has since backed more than 400 companies, including Intel and Apple; Don Valentine formed Sequoia Capital, which would invest in Atari, Apple, Oracle, Cisco, Google, and YouTube.

Doriot himself would likely have felt at home among today’s embattled and outnumbered regulation-skeptics in the technology policy community:

he opposed both the dirigiste political economy of his native France and the tax hikes and anticompetitive laws enacted in the United States under the New Deal. Such regulations, he maintained, arrogated to bureaucrats the function of the markets; their worst feature was that they let government lend money to failing businesses. Ante notes that a former colleague of Doriot’s, James F. Morgan, recalled him as “the most schizophrenic Frenchman I’ve ever met”–devoted to his original land’s wine, cuisine, and language even as “the French capacity to make very simple things complicated drove him nuts.”

Continue reading →

I have in past years learned a great deal from reading John Calfee’s book “Fear of Persuasion,” on the consumer benefits of advertising. Now he is writing on drug development in “The Indispensable Industry”:

http://www.american.com/archive/2008/may-june-magazine-contents/the-indispensable-industry

He considers, one after another, various proposals to fund drug development using public funds, prizes, or other plans. He writes:

There are two problems with government and nonprofit R&D as a substitute for the traditional for-profit industry. One lies in what the nonprofit sector has not tried to do; the other lies in what it has tried to do.

We have to remember that no laws, regulations, or traditions have prevented the public research system from inventing the drugs we need if it was really capable of doing that and no one else was. In principle, publicly funded drug research can run all the way from basic research through clinical trials to FDA approval and, if the believers in this approach are correct, it can be conducted at reasonable costs including the inevitable losses from drilling dry holes.

But let’s look at the record. If we really had a reliably productive government-nonprofit drug development system, we should have seen its fruits by now. Those fruits would have arrived in such areas as the testing of off-patent drugs with great potential and the creation of new drugs where profit incentives are inherently weak because of inadequate intellectual property laws. We should have seen, for example, clinical demonstrations of aspirin for heart disease and cancer much faster than actually occurred…

The piece is well worth reading in its entirety.

So I finally had a chance to read Beth Simone Noveck’s article on wiki-government, which Jim has previously posted about. The idea is to take the tools of mass collaboration that have given us Wikipedia and Linux and apply them to the development of policy. Like the encyclopedias and operating systems of the past, policy development today is often the exclusive domain of government experts. Noveck coins the term “civic software” to refer to collaboration tools aimed at policy.

While I’m a fan of the power of crowds (see my recent paper on “crowdsourcing government transparency”), I’d like to take issue with one minor point of her plan. She critiques our current system of experts, saying, “Sometimes these pre-selected scientists and outside experts are simply lobbyists passing by another name.” (I’d change “sometimes” to “often.”) The implication is that a mass collaborative process might help limit the influence of special interests in policy-making. How? The wiki-wonks, Noveck suggests, won’t be limited to appointed pros:

People have no option to self-select on the basis of enthusiasm, rather than being chosen on the basis of profession. Even when not unduly subject to political influence, the decision as to who participates is based on institutional status. Those who may have meaningful contributions to make–graduate students, independent scientists, avid hobbyists, retired executives, and other consultants in the “free agent nation”–fall outside the boundaries of professional institutions and status and will of necessity be excluded, regardless of their intellectual assets.

But isn’t that what lobbies are? The most enthusiastic on a given issue? The same way ornithologists or passionate bird watchers are the ones writing the Wikipedia entries about robins, it seems to me that special interests will be the most active in shaping any wiki-policy. As Hillary Clinton likes to say, lobbyists “represent real Americans.” I don’t think wiki-government can meaningfully diminish special interest influence.

That said, Noveck’s Peer-to-Patent pilot program with the USPTO is an excellent idea. I especially like how the community chooses what gets sent to the Patent Office:

The community not only submits information, but it also annotates and comments on the publications, explaining how the prior art is relevant to the claims of the patent application. The community rates the submitted prior art and decides whether or not it deserves to be shared with the USPTO. Only the 10 best submitted prior-art references, as judged on the basis of their relevance to the claims of the patent applications by the online review community, will be forwarded to the patent examiner.
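Just to make that selection step concrete, here is a rough sketch, in Python, of how a community rating-and-ranking filter along those lines could work. To be clear, this is not Peer-to-Patent’s actual code; the PriorArtReference structure, the rating scale, and the select_for_examiner function are hypothetical names I’m using only to illustrate the idea of forwarding the ten highest-rated references to the examiner.

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class PriorArtReference:
    """A community-submitted prior-art reference (hypothetical structure)."""
    citation: str
    annotation: str                               # community explanation of relevance to the claims
    ratings: list = field(default_factory=list)   # community relevance scores, e.g. on a 1-5 scale

    def average_rating(self) -> float:
        # Unrated items get a zero score so they sort to the bottom.
        return mean(self.ratings) if self.ratings else 0.0

def select_for_examiner(submissions, limit=10):
    """Forward only the `limit` highest-rated references, mirroring the top-10 rule described above."""
    rated = [s for s in submissions if s.ratings]   # items nobody has rated never reach the examiner
    return sorted(rated, key=lambda s: s.average_rating(), reverse=True)[:limit]

if __name__ == "__main__":
    queue = [
        PriorArtReference("US 5,000,000", "Discloses the claimed mechanism", [4.5, 4.0, 5.0]),
        PriorArtReference("JP 2001-123456", "Similar method, different field of use", [2.0, 3.0]),
        PriorArtReference("ACM paper (1998)", "Background only, not yet rated", []),
    ]
    for ref in select_for_examiner(queue):
        print(f"{ref.citation}: average rating {ref.average_rating():.2f}")
```

A real system would obviously need safeguards against gaming and sock-puppet ratings, but the core filtering logic really is that simple.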

I’d love to see something like this for regulations. There’s no reason why we must wait for a government pilot program to do this. Maybe we can set up a wiki where the community can collaborate on a comment on a proposed agency rulemaking and the finished product is submitted to the docket. There’s no such thing as a neutral point of view when it comes to policy, so the wiki would have to have some first principles the community agrees to, or maybe a mechanism for developing several opposing comments. Thoughts?