William Powers, a writer who has been a columnist and media critic for such publications as The Washington Post, The New Republic, and National Journal, discusses his new book, Hamlet’s BlackBerry: A Practical Philosophy for Building a Good Life in the Digital Age. In the book, Powers writes, “You can allow yourself to be led around by technology, or you can take control of your consciousness and thereby your life.” On the podcast, he discusses historical philosophers’ ideas that can offer shelter from our present deluge of connectedness, how to create gaps that allow for currently elusive depth and inward reflection, and strategies that help him and his family regain control over their technology.
To keep the conversation around this episode in one place, we’d like to ask you to comment at the web page for this episode on Surprisingly Free. Also, why not subscribe to the podcast on iTunes?
I’ve grown increasingly tired of the fight over not just retransmission consent, but almost all TV regulation in general. Seriously, why is our government still spending time fretting over a market that is more competitive, diverse and fragmented than most other economic sectors? It’s almost impossible to keep track of all the innovation happening on this front, although I’ve tried here before. Every metric — every single one — is not just improving but exploding. Just what’s happening on the kids’ TV front is amazing enough, but the same story is playing out across other programming genres and across multiple distribution platforms.
More proof of just how much more diverse and fragmented content and audiences are today comes in this excellent new guest editorial over at GigaOm, “The Golden Age of Choice and Cannibalization in TV,” by Mike Hudack, CEO of Blip.tv. Hudack notes that, compared to the Scarcity Era, when we had fewer choices and were all forced to watch pretty much the same thing, today’s media cornucopia is overflowing, and audiences are splintering as a result. “Media naturally trends towards fragmentation,” he notes. “As capacity increases so does choice. As choice increases audiences fragment. When given a choice people generally prefer media that speaks to them as individuals over media that speaks to the ‘masses.’”
Indeed, he cites Nielsen numbers I’ve used here before illustrating how the top shows of the ’50s (like Texaco Star Theater) netted an astonishing 60-80% of U.S. television households, while a more recent hit like American Idol is lucky if it can manage an audience share over 15%.
In the current issue of *Foreign Affairs*, Deputy Defense Secretary William J. Lynn III has one of the [more sober arguments](http://www.foreignaffairs.com/articles/66552/william-j-lynn-iii/defending-a-new-domain) for government involvement in cybersecurity. Naturally, his focus is on military security and the Pentagon’s efforts to protect the .mil domain and military networks. He does, however, raise the question of whether and how much the military should be involved in protecting civilian networks.
One thing that struck me about Lynn’s article is its wholesale rejection of the Cold War metaphor for cybersecurity. “[The United States] must also recognize that traditional Cold War deterrence models of assured retaliation do not apply to cyberspace, where it is difficult and time consuming to identify an attack’s perpetrator,” he writes. Since attribution is nearly impossible on the internet, he suggests that the better strategy would be “denying any benefits to attackers [rather] than imposing costs through retaliation.”
What’s interesting is that this stands in utter contrast to the recommendations of cybersecurity enthusiasts like former NSA chief Michael McConnell, who argued for the opposite approach earlier this year in a [1,400-word op-ed](http://www.washingtonpost.com/wp-dyn/content/article/2010/02/25/AR2010022502493.html) in the *Washington Post*.
Tim Wu’s new book, The Master Switch: The Rise and Fall of Information Empires, will be released next week and it promises to make quite a splash in cyberlaw circles. It will almost certainly go down as one of the most important info-tech policy books of 2010 and will probably win the top slot in my next end-of-year list.
Of course, that doesn’t mean I agree with everything in it. In fact, I disagree vehemently with Wu’s general worldview and recommendations, and even with much of his retelling of the history of information sectors and policy. Nonetheless, for reasons I will discuss in this first of many critiques, the book’s impact will be significant because Wu is a rock star in this academic arena as well as a committed activist, serving as chair of the radical regulatory group Free Press. Through his work at Free Press and the New America Foundation, Professor Wu is attempting to craft a plan of action to reshape the Internet and cyberspace.
I stand in opposition to almost everything that Wu and those groups stand for; thus, I will be spending quite a bit of time addressing his perspectives and proposals here in the coming months, just as I did when Jonathan Zittrain’s hugely important The Future of the Internet & How to Stop It was released two years ago (my first review is here and my latest critique is here). In today’s essay, I’ll provide a general overview and foreshadow my critiques to come. (Note: Tim was kind enough to have his publisher send me an advance uncorrected proof of the book a few months ago, so I’ll be using that version to construct these critiques. Please consult the final version for cited material and page numbers.)
(Second in a series.)
I recently picked up a copy of Robert Wuthnow’s Be Very Afraid: The Cultural Response to Terror, Pandemics, Environmental Devastation, Nuclear Annihilation, and Other Threats. According to the dust cover, the Princeton sociologist’s book “examines the human response to existential threats…” Contrary to common belief, we do not deny such threats but “seek ways of positively meeting the threat, of doing something—anything—even if it’s wasteful and time-consuming.” Interesting batch of ideas, no?
Well, the fifth paragraph of the book, joined with some pretty obscure and disorganized writing in the introduction, disqualified it from absorbing any more of my precious time. That paragraph contains this sentence: “Millions could die from a pandemic or a dirty bomb strategically placed in a metropolitan area.”
It’s probably true that millions could die from a pandemic. Two million deaths would be just under 0.03% of the world’s population—not quite existential. But the killer for the book is Wuthnow saying that millions could die from a dirty bomb placed in a metropolitan area. There will never be that many deaths from a dirty bomb, placed anywhere, ever.
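A quick back-of-the-envelope check of that percentage (assuming a 2010 world population of roughly 6.9 billion, a figure the book does not supply):

$$\frac{2 \times 10^{6}\ \text{deaths}}{6.9 \times 10^{9}\ \text{people}} \approx 0.00029 \approx 0.029\% < 0.03\%$$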
One suspects that the author doesn’t know what a dirty bomb is. A dirty bomb is a combination of conventional explosives and radioactive material, designed to disperse the radioactive material over a wide area. A dirty bomb is not a nuclear explosive, and its lethality is little greater than that of a conventional weapon, because the radiological material is likely to be too dispersed and too weak to cause serious health problems.
Dirty bombs are meant to scare. Incautious discussion of dirty bombs has induced more fright in our society than any actual bomb. Professor Wuthnow asserts, as fact, that a dirty bomb could kill millions, which is plainly wrong. If he doesn’t know his subject matter, he doesn’t get any more time from this reader.
Given my brief experience with the book, I advise you to be very afraid of Be Very Afraid.
An important anniversary just passed with little more notice than an email newsletter about the report that played a pivotal role in causing the courts to strike down the 1998 Child Online Protection Act (COPA) as an unconstitutional restriction on the speech of adults and website operators. (COPA required all commercial distributors of “material harmful to minors” to restrict their sites from access by minors, such as by requiring a credit card for age verification.)
>The Congressional Internet Caucus Advisory Committee is pleased to report that even after 10 years of its release the COPA Commission’s final report to Congress is still being downloaded at an astounding rate – between 700 and 1,000 copies a month. Users from all over the world are downloading the report from the COPA Commission, a congressionally appointed panel mandated by the Child Online Protection Act. The primary purpose of the Commission was to “identify technological or other methods that will help reduce access by minors to material that is harmful to minors on the Internet.” The Commission released its final report to Congress on Friday, October 20, 2000.

>As a public service the Congressional Internet Caucus Advisory Committee agreed to virtually host the deliberations of the COPA Commission on the Web site COPACommission.org. The final posting to the site was the actual COPA Commission final report making it available for download. In the subsequent 10 years it is estimated that close to 150,000 copies of the report have been downloaded.
The COPA Report played a critical role in fending off efforts to regulate the Internet in the name of “protecting our children,” and it marked a shift toward focusing on what, in First Amendment caselaw, is called “less restrictive” alternatives to regulation. This summary of the report’s recommendations bears repeating:
>After consideration of the record, the Commission concludes that the most effective current means of protecting children from content on the Internet harmful to minors include: aggressive efforts toward public education, consumer empowerment, increased resources for enforcement of existing laws, and greater use of existing technologies. Witness after witness testified that protection of children online requires more education, more technologies, heightened public awareness of existing technologies and better enforcement of existing laws.
It’s wonderful to see that the FCC is putting spectrum front and center on its agenda. Yesterday it held a spectrum “summit” at which it released several [papers](https://docs.google.com/viewer?url=http://hraunfoss.fcc.gov/edocs_public/attachmatch/DOC-302330A1.pdf) looking at the challenges and opportunities mobile broadband faces, and it was [announced](http://techdailydose.nationaljournal.com/2010/10/spectrum-on-november-fcc-agend.php) that at its November meeting, the chairman will introduce several items related to spectrum reallocation. NTIA is keeping pace, [identifying over 100 MHz](http://techdailydose.nationaljournal.com/2010/10/ntia-identifies-federal-spectr.php) now in federal hands (mostly DoD) to be moved over for commercial use.
The consensus that has led us to this happy time is that there is a spectrum “shortage” or spectrum “crunch,” as many said yesterday. Here’s how Chairman Genachowski [explained it](https://docs.google.com/viewer?url=http://hraunfoss.fcc.gov/edocs_public/attachmatch/DOC-302331A1.pdf&pli=1):
>The explosive growth in mobile communications is outpacing our ability to keep up. If we don’t act to update our spectrum policies for the 21st century, we’re going to run into a wall—a spectrum crunch—that will stifle American innovation and economic growth and cost us the opportunity to lead the world in mobile communications.
>Spectrum is finite. Demand will soon outpace the supply available for mobile broadband.
Every natural resource is finite, however. So how exactly did we end up with this “spectrum crunch”?
The Federal Communications Commission has established a new advisory group called the “Technological Advisory Council.” Among other things it will advise the agency on “how broadband communications can be part of the solution for the delivery and cost containment of health care, for energy and environmental conservation, for education innovation and in the creation of jobs.”
This is an agency that is radically overspilling its bounds. It has established goals that it has no proper role in fulfilling and no idea how to fulfill. As we look for cost-cutting measures at the federal level, we could end the pretense that the communications industry should be regulated as a public utility. Shuttering the FCC would free up funds for better purposes, such as lowering the national debt or reducing taxes.
As we enter day 5 of the standoff between Cablevision and News Corp. over the retransmission of local Fox stations, the controversy over a supposed net neutrality violation has died down, but pressure on the FCC to interfere with the parties’ negotiations [is mounting](http://voices.washingtonpost.com/posttech/2010/10/pressure_mounting_for_fcc_to_i.html). Sen. Kerry has also released a [draft bill](http://techliberation.com/wp-content/uploads/2010/10/kerry-retrans-bill.pdf) [PDF] that would reform the Cable Act’s retransmission consent rules to force TV stations to accept FCC mediation and allow carriage of their signals during a contract dispute.
It’s almost ironic that some would call for more FCC interference to solve a problem that is at least partly caused by FCC regulation. Cablevision is in New York, and what it wants is to carry Fox programming. The local Fox stations, owned and operated by News Corp., are demanding what Cablevision considers too high a price. So why wouldn’t Cablevision just turn to a Fox affiliate in Michigan for the content? The answer is that FCC regulations authorized by the Cable Act take that excellent bargaining chip away from video providers.
I’ve been looking into the cybersecurity issue lately, and I finally took the time to do an in-depth read of the [Securing Cyberspace for the 44th Presidency](http://csis.org/publication/securing-cyberspace-44th-presidency) report, which is frequently cited as one of the soundest analyses of the issue. It was written by something of a self-appointed presidential transition commission called the “Commission on Cybersecurity for the 44th President,” chaired by two congressmen and with a membership of notables from the IT industry, defense contractors, and academia, and sponsored by CSIS.
What struck me is the complete lack of any verifiable evidence to support the report’s claim that “cybersecurity is now a major national security problem for the United States[.]” While it offers many assertions about the sorry state of security in government and private networks, the report advances no reviewable evidence to explain the scope or probability of the supposed threat. The implication seems to be that the authors are working from classified sources, but the “if you only knew what we know” argument from authority didn’t work out for us in the run-up to the Iraq war, and we should be wary of it now.