This week I’m out in Las Vegas covering the grand-daddy of all industry trade shows–the Consumer Electronics Show (CES). It’s an amazing spectacle to behold, and it’s impossible to even begin to summarize all the great gadgets I’m seeing and the issue panels I’m covering. But over the next few days I’ll try to share a few highlights.
Although things really don’t get into full swing until Monday morning of the event, Sunday featured several panel discussions about the future of the gaming industry. Faithful readers will recall my love of video games and my many columns on gaming issues.
I attended 5 different panel discussions. The first two proved the most interesting to me. They were entitled “Broadband Games Expand: From Casual to the Networked PC Universe” and “Entertainment as Franchise: Games Cross over into Music, TV, Cable, Movies, Mobile, Advertainment & Custom Branded Experience.” The other panels were on massively multiplayer online games, mobile gaming, and cross-platform branding and advertising. Here are a few highlights:
The report puts to rest the commonly held belief that screen calibration problems could account for all of the reported instances of vote flipping. Some of the offending machines were not touchscreen models–voters used a selection wheel to make their choices. In other cases, the voters would make a touchscreen selection for one slate of candidates, only to have the summary screen (and in some cases the paper tape) report that half or more of the selections had been flipped.
Notably, there were reports of vote-flipping in the hotly contested FL-13 race in Sarasota County, FL. Most recently in the legal contest over that disputed race, a federal judge has declined the Jennings camp’s demands to see the source code to the voting machines used. (In effect, the judge has declared that America’s citizens are not allowed to see how the votes were counted in this very close race, because to reveal that information would violate the voting machine company’s “trade secrets.”)
Other problems described in the report included difficulties with printing the voter-verified paper audit trails (VVPATs) that are required by law in a few states, and that may soon be required by federal law in all states. In some cases, the VVPATs didn’t match voter choices, but the most common problem was that they were simply unavailable, typically due to printing problems.
This is why it’s critical that the paper be the official record for the election result. If I were going to steal a DRE election, the first thing I would do is cause a “malfunction” with the machines’ printer–something that’s rather easy to simulate in software. Therefore, the paper trail is absolutely useless if voters are allowed to continue using machines that are unable to produce paper records. Only if pollworkers know that the paper ballot is what will actually get counted will they ensure that every voter’s vote is properly recorded on paper.
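To make concrete how cheap that kind of “malfunction” is to fake, here’s a purely hypothetical sketch in Python. The printer object and its methods are invented for illustration and come from no real voting system; the point is only that a convincing hardware fault is a couple of lines of code.

```python
import random

class Printer:
    """Stand-in for a VVPAT printer driver; entirely hypothetical."""
    def report_error(self, msg):
        print(f"ERROR: {msg}")
    def print_record(self, ballot):
        print(f"printed: {ballot}")

def print_vvpat(printer, ballot):
    """Sketch of how a compromised DRE could fake a printer fault."""
    if random.random() < 0.3:                # "malfunction" on roughly 30% of ballots
        printer.report_error("PAPER JAM")    # looks like an ordinary hardware failure
        return False                         # no paper record is ever produced
    printer.print_record(ballot)
    return True
```

A machine running something like this would look, to a poll worker, exactly like one with a flaky printer–which is why the electronic tally can’t be the fallback record.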
We’ve argued, along with many others, that it’s a clear benefit to the overall economy, and to the tech industry in particular, to have skilled and educated immigrant workers come over from abroad. Still, it’s always nice to have some data to back this assertion up, just to ward off accusations of being a wild idealist. A new study published by Duke University finds that a full quarter of all tech startups founded between 1995 and 2005 had an immigrant as either a founder or a key executive. These companies, it’s estimated, employed a total of 450,000 workers and had revenues of $52 billion.

The mistake made by those who oppose immigration for economic reasons is that they think of the overall economic picture as being fixed. In other words, they look, say, at the number of jobs in existence today and simply assume that if more people compete for them, then domestic workers will increasingly go unemployed while overall wages are depressed. But as studies like this show, there’s nothing fixed about the economy. There’s always room for new startups, and existing companies will hire more people, assuming they’re talented and can add value. As the researchers note, the process of immigration is inherently ambitious, and going through it is a sign of one’s inclination to take risks. As more data like this becomes available, it will be an increasingly difficult argument to make that intelligent and skilled immigrant workers somehow drag down the economy.
I think it’s just nuts that we place so many restrictions on immigration by highly skilled workers. One can make a plausible argument (one I don’t agree with, but plausible) that current limits on immigration of low-skilled workers are necessary to avoid placing undue burdens on taxpayers, given that low-skilled immigrants might collect more in social services than they pay in taxes. But this argument simply doesn’t apply to a guy with 20 years of experience as a computer programmer or a master’s degree in economics. Such workers are all but guaranteed to have well-paying jobs and contribute to the tax base in the short run. And in the long run, some of them will go on to create successful businesses that will employ Americans and create new wealth.
So I don’t know why we don’t let every single person who has an advanced degree or can demonstrate significant technical skills into the country. It’s good for the immigrants, it’s good for the companies that employ them, and in the long run it’s good for everyone.
Matt Yglesias points out that the most important thing we need to do to address the problem of the classics becoming inaccessible involves fixing copyright law:
All the ‘sphere’s a twitter about some libraries dumping little-read classics in favor of more high-demand contemporary bestsellers. Julian’s post on this, however, inspired me to remark that far and away the most important thing for the preservation of the classics has nothing to do with library policies and everything to do with intellectual property policy.
In a world where classic works enter the public domain, people will get them one way or another. They’ll be available for free download on the internet. E-book technology will improve. Print copies will be cheaply available to people who want to buy them. Whether or not these things are in local libraries sort of won’t be a huge deal one way or another. Now, traditionally, copyrights have had limited durations and “classic” books, being old by definition, tend to be in the public domain and hence widely available. In a digital era, they’ll be super-available. But the emerging trend of the digital era is for retroactive extensions of copyright terms, meaning that nothing new will ever enter the public domain. Ever.
Quite so. The vast majority of books from the 20s, 30s, 40s, and so on are currently sitting on dusty library shelves, hardly ever looked at by anyone. We now have the technology to digitize all those books and turn them into a veritable treasure trove of easily-searchable information about decades gone by. Yet in order to ensure that Disney continues to turn a profit on Mickey Mouse, making those hundreds of thousands of commercially worthless works available to the general public is effectively illegal. In effect, 80 years of 20th century culture is in danger of being locked up so that a small number of copyright holders can profit from the minuscule fraction of works from the 1920s that still have commercial value. It’s really quite a shame.
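To put rough numbers on that, here’s a back-of-the-envelope sketch. It assumes the pre-1976 rules (a 28-year term, renewable once) versus the current 95-years-from-publication term for renewed works published before 1978, and assumes no further extensions are passed:

```python
def copyright_expiry(pub_year):
    # Pre-1976 rules: 28-year initial term, renewable once, for 56 years total.
    old_rules = pub_year + 56
    # Current rules for renewed works published before 1978: 95 years
    # from publication (assuming no further term extensions).
    current_rules = pub_year + 95
    return old_rules, current_rules

old, new = copyright_expiry(1927)
print(f"A 1927 work: copyright would have run out around {old} under the old rules;")
print(f"under today's rules it stays locked up through roughly {new}.")
# -> around 1983 under the old rules, versus roughly 2022 today.
```

That 40-year gap, multiplied across everything published in the 20s and 30s, is the chunk of the century that stays off-limits to digitization.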
And don’t get me started on the way that copyright law is hampering film preservation.
One of the cool things about having a blog is having readers who know more than you do. (Or, conversely, maybe the frustrating thing about reading TLF is that we often don’t know what we’re talking about.) Anyway, reader Jim Lippard notes some problems with my recent space posts:
NASA hasn’t been doing a whole lot of commercial space business for the last few years–there were no shuttle flights from the destruction of Columbia on February 1, 2003 until the launch of Discovery on July 26, 2005 (STS-114). It had some of the same issues as Columbia, so there wasn’t another flight until July 4, 2006 (STS-121). STS-115 launched on September 9, 2006, STS-116 launched on December 9, 2006. STS-117 is scheduled for mid-March 2007.
Of recent years’ shuttle launches, only STS-116 launched satellites, and it was the first to do so since STS-113 (launched November 24, 2002).
If you need a satellite launched in a hurry, NASA is not the place to go… you’re better off going with Sea Launch, a consortium managed by Boeing, which is the company that puts satellites into orbit for XM, EchoStar, and DirecTV.
So far as I can see, NASA is the U.S. Postal Service of satellite launches, while Sea Launch is the FedEx. It doesn’t look to me like NASA is inhibiting private space companies at all.
I stand corrected. I would be very interested in knowing more about how Sea Launch and NASA’s launch costs compare.
Cool! Amazon.com founder Jeff Bezos has announced the first in a series of test flights that he hopes will culminate in a vehicle capable of taking astronauts into space:
Despite the inflationary effects of NASA subsidies, it seems that the cost of private space travel is beginning to drop to the level where it’s within the reach of private individuals.
I found this post, from our friends over at Public Knowledge, rather puzzling. Art Brodsky thinks that the FCC’s decision to mandate that local franchise authorities approve franchise requests within 90 days “won’t help consumers.” But after reading his post twice, I find his argument underwhelming. He acknowledges that…
There’s general agreement in principle that competition can benefit consumers. In the rare examples so far of competition in video, the over-priced cable services have had to lower their rates to compete with telephone company entrants. A cynic might say that over time, as telephone companies enter the market, a nice, comfy duopoly will settle in and price decreases will moderate. For now, the idea of cable having some serious competition is good.
So what’s the problem? He notes that the FCC may have exceeded its authority, which is an important point but hardly evidence that the proposal is harmful to consumers. The real meat of his objection seems to be that…
The document I discussed in my previous post also provides concrete examples of a point I’ve made before: DRM is the technological equivalent of central planning. In its effort to prevent piracy, Microsoft is making the Windows A/V system more and more monolithic, with the operating system performing more and more functions that would traditionally be performed by third-party software, and prohibiting third-party software from overriding those functions. Monolithic software has many of the same flaws that centrally planned economies do: the thousands of third-party software developers out there have local knowledge about what their customers want that Microsoft does not possess. Hence, the more Microsoft centralizes decisions about what A/V features the OS will have, the more likely it is to screw up and fail to provide needed functionality. That gives you lovely outcomes like this:
As well as overt disabling of functionality, there’s also covert disabling of functionality. For example PC voice communications rely on automatic echo cancellation (AEC) in order to work. AEC requires feeding back a sample of the audio mix into the echo cancellation subsystem, but with Vista’s content protection this isn’t permitted any more because this might allow access to premium content. What is permitted is a highly-degraded form of feedback that might possibly still sort-of be enough for some sort of minimal echo cancellation purposes.
The requirement to disable audio and video output plays havoc with standard system operations, because the security policy used is a so-called “system high” policy: The overall sensitivity level is that of the most sensitive data present in the system. So the instant any audio derived from premium content appears on your system, signal degradation and disabling of outputs will occur.
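For readers wondering why echo cancellation needs that feedback sample in the first place, here’s a toy sketch of a basic LMS echo canceller in Python. It’s mine, not from the Vista documentation or any real AEC implementation, but it shows the core dependency: the adaptive filter is trained against the reference copy of the playback audio, so if that reference is degraded, the echo estimate–and the cancellation–degrades with it.

```python
import numpy as np

def lms_echo_cancel(mic, playback_ref, taps=64, mu=0.005):
    """Toy LMS echo canceller.

    mic:          what the microphone picked up (near-end speech + echo of playback)
    playback_ref: the feedback copy of the audio the system just played out --
                  exactly the sample Vista's content protection degrades
    """
    w = np.zeros(taps)                        # adaptive filter estimating the echo path
    out = np.zeros(len(mic))
    for n in range(taps, len(mic)):
        x = playback_ref[n - taps:n][::-1]    # most recent reference samples
        echo_est = w @ x                      # predicted echo at the mic
        e = mic[n] - echo_est                 # what's left should be the near-end voice
        w += 2 * mu * e * x                   # LMS update, driven by the reference
        out[n] = e
    return out

# Tiny demo: the playback echoes into the mic with a one-sample delay and some noise.
ref = np.random.randn(2000)
mic = 0.6 * np.roll(ref, 1) + 0.01 * np.random.randn(2000)
cleaned = lms_echo_cancel(mic, ref)
```

Hand that function a “highly-degraded form of feedback” instead of the real mix, and the residual echo it leaves behind grows accordingly.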
David Levine links to this fantastic summary of the new copy protection standards in Windows Vista, and the many problems those standards are likely to cause. Here’s the upshot:
In order to work, Vista’s content protection must be able to violate the laws of physics, something that’s unlikely to happen no matter how much the content industry wishes it were possible. This conundrum is displayed over and over again in the Windows content-protection requirements, with manufacturers being given no hard-and-fast guidelines but instead being instructed that they need to display as much dedication as possible to the party line. The documentation is peppered with sentences like:
“It is recommended that a graphics manufacturer go beyond the strict letter of the specification and provide additional content-protection features, because this demonstrates their strong intent to protect premium content”.
This is an exceedingly strange way to write technical specifications, but is dictated by the fact that what the spec is trying to achieve is fundamentally impossible. Readers should keep this requirement to display appropriate levels of dedication in mind when reading the following analysis.
What we see throughout the document is the kind of thrashing that inevitably occurs when an industry’s management orders its engineers to do something that’s technically impossible. They’re forced to go to ever-more-heroic lengths to accomplish the impossible goal, leading to more and more bad design decisions. The result, in this case, is that the quality of all A/V output across the entire operating system is degraded any time any “premium” content is being played and a single “non-secure” device is installed on the computer. All this effort still isn’t going to stop copyright infringement, but it’s going to be a major pain in the ass for consumers.
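The “system high” behavior quoted earlier boils down to a policy you can write in a few lines. This toy model is purely illustrative (all the names are mine, not Microsoft’s):

```python
def output_policy(streams, outputs):
    """Toy model of a "system high" policy: the whole system is treated as being
    as sensitive as the most sensitive content currently playing."""
    premium_playing = any(s["premium"] for s in streams)
    unprotected_out = any(not o["protected_path"] for o in outputs)
    if premium_playing and unprotected_out:
        # Everything gets constricted or disabled, not just the premium stream.
        return "degrade all outputs"
    return "full quality"

# One premium stream plus one output lacking a protected path is enough
# to degrade every stream on the machine.
print(output_policy(
    [{"premium": False}, {"premium": True}],
    [{"protected_path": True}, {"protected_path": False}],
))  # -> degrade all outputs
```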
Via Mike Linksvayer, the Mozilla Foundation has reported that it took in $52.9 million in revenues in 2005, mostly from “our search engine relationships,” which I think mostly means payments from Google to have its search engine be the default in the Firefox toolbar. This more or less confirms rumors about Mozilla’s revenues that were reported last year.
This is fantastic news, and given that the search engine wars show no sign of abating, I have to imagine they earned similar revenues in 2006. This provides a big pot of money they can use to fund further improvements to Firefox and Mozilla’s other products, or to spend supporting the work of open source developers on other projects.
I occasionally see critics of open source software complain that its lack of revenues proves that “the market” has rejected it. But here we have a pretty clear counter-example. The Mozilla community has created a product that’s so valuable that they’ve stumbled upon a “business model” for it–almost by accident–that’s worth $50 million. And given that this is a product that’s given away for free to tens of millions of users, it’s a safe bet that if you could put a dollar figure on the total wealth created by the Mozilla project, it would be a lot larger than that.