TLF’s Adam Thierer yesterday posted about the “Other America” — the part that just doesn’t give a hoot about broadband. But get ready for another shocker: there are also some who don’t care about over-the-air television.
This was pointed out by the ever-quotable Gary Shapiro, chief of the Consumer Electronics Association, at a DC policy forum yesterday. Citing a CEA survey on how people will handle the DTV transition, he argued that consumers would make informed decisions, with some buying new sets and some getting converter boxes.
“Others, frankly, don’t care,” he added. “You know, not everyone really wants free over-the-air broadcasting in their home.” It’s not just that 85 percent of viewers have cable or satellite service; quite a few are happy with video games and DVDs, he explained (according to Communications Daily).
Leave it to Shapiro to point out that the Emperor has no rabbit ears. In Washington circles, over-the-air TV is treated like a basic human need, like air itself. For weeks now, policymakers have been in a tizzy over the potential public reaction when analog signals are turned off in February 2009. (With the NAB even fretting over “disenfranchised” television sets.)
Certainly some people will care when the transition takes place — but the reaction will likely be less than the DC echo-chamber expects.
“There is fear-mongering going on and, frankly, this has become a political issue,” Shapiro said. “It is easy to go to government and say, ‘We need more money for something’. But the question is, is it really needed?”
It may be time to stop pushing that DTV panic button. And to put down that shovelful of money.
I testified in Congress yesterday, at a hearing on the REAL ID Act in the Senate Homeland Security and Governmental Affairs Committee’s Subcommittee on Oversight of Government Management, the Federal Workforce, and the District of Columbia. My testimony is here.
One issue I sought to highlight comes from studying the REAL ID regulations carefully: the standard the Department of Homeland Security selected for the 2D bar code on REAL ID-compliant cards includes race/ethnicity as one of its data elements.
DHS does not specifically require inclusion of this information, but states are likely to adopt the entire standard. Thus, starting in May 2008, many Americans may be carrying nationally uniform cards that include race or ethnicity in machine-readable formats – available for scanning and collection by anyone with a bar code reader. Government agencies and corporations may associate racial and ethnic data more closely than ever with information about our travels through the economy and society.
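To make concrete how little stands between that field and anyone who scans a card, here is a minimal sketch of reading such a payload, assuming an AAMVA-style format of three-letter element IDs followed by values. The sample data and the “DCL” race/ethnicity tag are illustrative assumptions for demonstration, not taken from any actual card or from the final standard:

```python
# Illustrative sketch only: parse an AAMVA-style 2D bar code payload
# (the plain text an off-the-shelf scanner emits) into its data elements.
# The payload below and the "DCL" race/ethnicity tag are assumptions.

SAMPLE_PAYLOAD = (
    "DCSDOE\n"        # family name
    "DACJANE\n"       # given name
    "DBB19700101\n"   # date of birth
    "DCLW\n"          # hypothetical race/ethnicity code
)

def parse_elements(payload: str) -> dict:
    """Split a scanned payload into {element_id: value} pairs."""
    elements = {}
    for line in payload.splitlines():
        if len(line) >= 3:
            elements[line[:3]] = line[3:]
    return elements

if __name__ == "__main__":
    data = parse_elements(SAMPLE_PAYLOAD)
    # Anyone with a bar code reader gets the same dictionary,
    # race/ethnicity code included.
    print(data.get("DCL"))  # -> "W"
```

The point of the sketch is simply that once the field is encoded on the card, extracting it takes a scanner and a few lines of code, with no cooperation from the cardholder.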
This was not intended by the authors of the REAL ID Act, nor was it intended by the regulation writers at the Department of Homeland Security. The Belgian colonial government in 1930s Rwanda had no intention to facilitate the 1994 genocide in that country either, but its inclusion of group identity in ID cards had that result all the same.
The woman in the image below, believed to be a genocide victim, is categorized as a Tutsi just below her photograph. Her name is not seen, as it appears on the first page of this folio-style ID document. The names of her four children, though, are written in on the page opposite the photo.
The lessons of history are available to us. The chance of something like this happening in the United States is blessedly small, but it is worth taking every possible step to avoid this risk, given an always-uncertain future. In a society that strives for a color-blind ideal, the federal government should have no part in creating a system that could be used to track people based on race.
Cross-posted from Cato@Liberty
From Greg Costikyan of Manifesto Games, on DRM and information. It’s from the “creators are getting screwed” genre, but it makes some interesting points about creators’ need for DRM even if the intermediaries (distributors, marketers) are taken out of the game.
The “creators are getting screwed” genre is a subset of the larger argument that takes the form of “__A__ is doing it all wrong; they just need to __B__,” where A = some person with substantial experience in an immensely complex field (a football coach or the entire recording industry, for example) and B = some simple alternative arrangement that comes to mind in casual conversation. It is usually not nearly that simple, as I suspect Greg has the savvy to recognize (particularly since he notes that his own gameco, which is more generous with artists, is operating in the red). Dollars do not lie around on sidewalks for long.
Mike Masnick warns that the future of VoIP is in jeopardy:
You have to wonder how many times fans of the patent system have to repeat the mantra that “patents encourage innovation” before they can actually believe it. There continues to be so much new evidence, on nearly a daily basis, of patents doing the exact opposite that it’s hard to believe the patent system retains as many supporters as it does. The latest is that a ton of patent holders are preparing to sue over various VoIP-related patents, following the news of Verizon’s big win over Vonage for VoIP patents. The problem, of course, is that tons of companies (some big, some small) all claim patents on various aspects of VoIP — creating the very definition of the “patent thicket.” That is, there are so many patents around the very concept of VoIP that no one company can actually afford to offer a VoIP service, since the cost to license all the patents is simply too prohibitive. Expect plenty more lawsuits in the near future as this all comes out in court. The big players will use their patents to keep out competition, and the small players will use the patents to try to create an NTP-style lottery ticket. The lawyers will all win — but consumers who just want to use VoIP will lose big time. What’s wrong with letting companies simply compete in the marketplace and letting the natural forces of competition encourage innovation? Instead, we get patent holders trying to hold back competition and hold back innovation.
I think we critics of the patent system need to be careful about overstating our case. I don’t think software patents will destroy the VoIP industry. Rather, they will serve as a steady drag on the industry, raising the cost of doing business and forcing innovative upstarts to spend their money hiring lawyers rather than engineers. This will, in turn, tilt the playing field to Verizon’s benefit.
Most likely the result will be that small, entrepreneurial firms will be squeezed out of the market, leaving a bifurcated market between large, deep-pocketed incumbents on the one hand, and decentralized open source projects and overseas firms on the other. Tech-savvy Americans will have no trouble finding and installing innovative VoIP solutions, but the vast bulk of Americans will have to use whatever Verizon and other patent-heavy firms choose to dish out. (It’s an interesting question what will happen to firms like Skype, Apple, AOL, and Google that offer “pure Internet” voice calling and have not, to date, made a significant dent in the telephone market.)
What this case makes crystal clear is that there’s no appreciable connection between innovating and getting patents. No one would argue that Verizon has been more innovative than Vonage in the VoIP market, yet because Verizon has spent more money filing for patents in recent years, Vonage is placed in the ridiculous position of paying Verizon for the privilege of using Verizon’s “inventions.”
In yesterday’s Wall Street Journal, Cyren Call Chairman Morgan O’Brien and Frontline Wireless Chairman Janice Obuchowski each had a letter to the editor responding to my March 13th op-ed about first responder communications. I’d like to take up just a few sentences to respond.
O’Brien writes that I “audaciously misrepresent[ed]” Cyren Call’s proposal, but does not point out what that misrepresentation was. So, I can’t answer. Obuchowski, on the other hand, does point out a misstatement about Frontline’s plan. She writes,
[Brito] misstates that the plan would build “an interoperable network over spectrum purchased at auction; but Frontline wants the FCC to restrict that spectrum to public safety use.” Frontline will offer commercial service in the spectrum won at auction and provide public safety with pre-emptible access during emergencies to this commercial spectrum to provide additional capacity during peak periods of crisis when first responders’ communications requirements spike. This spectrum would remain in commercial use at all other times.
The thing is, I have always fully understood that the Frontline proposal would share the spectrum between public safety and commercial users. The error was introduced by a WSJ edit made after the last version of the op-ed that I approved the evening before it was published. (I don’t blame the WSJ; they were probably just editing for length or style.)
Continue reading →
During our TLF happy hour last week (you can listen to the “live-from-the-bar” podcast here!), I got into a debate with some of my TLF colleagues about the future of physical versus non-physical media. I was making the argument that the impending death of physical media at the hands of intangible, digital storage has been greatly exaggerated. One of the points I made was that some people just love to “kick the tires” of their media and have something to look at and store on a shelf, whether it be a CD, a DVD, a photo album, a book, or anything else. Even though I’m increasingly an all-digital storage guy like most of my TLF colleagues, there are still a lot of people out there who think differently than we do and prefer the old way of doing things. (I wrote about all this at greater length here).
But there’s another reason that physical media has a future: A lot of people just don’t give a damn about digital technology and the Internet at all. Really, it’s true! Just check out the results from this recent survey by Parks Associates:
A little under one-third of U.S. households have no Internet access and do not plan to get it, with most of the holdouts seeing little use for it in their lives, according to a survey released on Friday. Parks Associates, a Dallas-based technology market research firm, said 29 percent of U.S. households, or 31 million homes, do not have Internet access and do not intend to subscribe to an Internet service over the next 12 months.
The second annual National Technology Scan conducted by Parks found the main reason potential customers say they do not subscribe to the Internet is the low value they perceive it has for their daily lives, rather than concerns over cost. Forty-four percent of these households say they are not interested in anything on the Internet, versus just 22 percent who say they cannot afford a computer or the cost of Internet service, the survey showed. [emphasis added]
Continue reading →
Having smart readers is great! Check out the comments to my post on wireless commons, wherein TLF readers who actually know what they’re talking about elaborate on the strengths and weaknesses of unlicensed spectrum and mesh networks. For example:
In general, the concept of spectrum commons is intuitively appealing. Unlicensed spectrum has already proven its value with the proliferation of WiFi and the spectrum commons approach dangles the possibility of extending the promise of unlicensed spectrum to a near utopian degree. This is always presented as a superior alternative to the sclerotic bureaucracy of the FCC making decisions on spectrum use. However, in the real world where people are actually building modems, radios and consumer devices, the regulatory context of the FCC provides more than just an economic model of how spectrum is used (i.e. spectrum as property with markets vs unlicensed or common spectrum). It also provides a technical context for engineers who design and build the technology. RF is pretty wacky stuff and although increasing computational power and antenna technologies are of critical importance and key enablers to new wireless architectures and protocols, they don’t eliminate the world of cavity filters, intermodulation distortion, adjacent channel interference, etc.
Ultimately, the either/or approach is problematic. Spectrum commons, like unlicensed spectrum before it, holds great promise and regulatory bodies should embrace it by making spectrum available. But it’s also 10 or 20 years away from being ready for primetime. There’s a lot of usable radio spectrum. The real answer is to embrace and enable multiple approaches and philosophies of spectrum usage.
More good stuff here.
The Parents Television Council (PTC), a media activist group that routinely petitions Congress and the FCC for greater content regulation, recently released a new poll which they say proves that the V-Chip and parental control technologies have been a failure.
Their poll finds that only 11% of those surveyed said they used the V-Chip or their cable box parental controls to block unwanted content from their television during the past week. And that result is virtually unchanged from a poll they took last September asking the same question. Therefore, the PTC concludes that recent efforts by broadcasters and cable companies to spend hundreds of millions of dollars educating families about these parental control tools have been a failure. And, unsurprisingly, the PTC feels that this again shows the need for government regulators to step in and do more national nannying for us.
As I’ll make clear in a moment, the V-Chip and current television ratings are certainly not perfect. And I have no doubt that household usage of these tools is quite low for reasons I’ll get into. But let me first address what appears to be a rather glaring methodological deficiency of this PTC poll which makes it difficult to take seriously.
Continue reading →
Don Marti has an excellent analogy to help illustrate what’s wrong with software patents:
If Victor invents something, and I describe it in prose, I’m not infringing. If he invents something and I build it as hardware, I am. But if I do something in between hardware and prose—“software”—where do you draw the line of where he can sue me? If Dr. David S. Touretzky doesn’t know where you draw the line between “speech” and “device,” how should the courts know?
All of the arguments for software patents work just as well for prose patents. Just as a software patent covers the algorithm, not the code, a prose patent could cover the literary device, sequence of topics, or ideas used to produce some effect on the reader…
The debate over software patents isn’t just an attempt to set one arbitrary line between the patentable and the unpatentable. It’s about resisting the slide toward higher and higher transaction costs that happens when patents creep into places where they don’t make sense. We have algorithm patents but not prose patents because lawyers and judges use analogies and other prose inventions more than they use algorithms.
Quite so. I think the reason you see such violent and near-unanimous dislike for software patents among computer programmers is that it’s not an abstraction for them. For most people, software is just a magical icon that sits on their desktop and does stuff when they double click on it. The question of whether software should be covered by patents is akin to debates over who owns the moon: intellectually interesting, but not really relevant to their day-to-day lives. But what computer programmers see is that widespread enforcement of software patents would mean that a significant portion of their professional lives would suddenly require regular consultation with lawyers. This pisses them off in precisely the same way—and for precisely the same reasons—that patents on plot devices, analogies, literary styles, and other prose concepts would piss off writers.
Based on my reading of the complaint, Tim Wu’s speculation on Viacom’s strategy seems about right:
Viacom seem to be preparing to argue that, since Youtube plays such a role in hosting the videos, and doing things like screening porn, the videos are not, in fact “user-directed content,” the hosting of which is protected by 17 U.S.C. 512(c).
The main challenge for that argument is the text of 512(c), which protects “user-directed content” or “the storage at the direction of a user of material that resides on a system or network controlled or operated by or for the service provider.”
Continue reading →