Here’s something cool.
My Web site, WashingtonWatch.com, now has a widget that lets you display site visitors' votes on particular bills.
For example, here’s the tally on H.R. 2821, The Television Freedom Act of 2007:
And here’s one of our favorite bills, H.R. 3773, The RESTORE Act of 2007, which amends the FISA law, possibly giving immunity to telecom companies that broke the law:
As I write this, it has more votes in favor than against. What’s up with you, America?
Get yourself and your neighbors involved, people.
As the days tick down to Halloween — and the formal expiration of the Internet tax moratorium — there’s a strong feeling of déjà vu in Washington. It’s like we’ve all been through this before.
We have. In 2004. And 2001. The periodic last-minute extension of the moratorium has become a regular feature of Washington’s political life. Which leads many to wonder: Why not just make the tax ban permanent?
The arguments for restrictions on state and local taxes are strong (they are summarized in a new Heritage Foundation paper just released this week). But still, policymakers seem reluctant to take the plunge toward permanence, with the House voting last week for yet another temporary extension.
Opponents — such as Tennessee’s Lamar Alexander — have argued strenuously against anything more long-lasting. With the Internet changing so quickly, it doesn’t make sense to write Internet tax policy into stone, they argue.
But it’s hard to believe that many are actually convinced by this. After all, with nine years’ experience with the moratorium, this is hardly an experimental policy. And Congress always keeps the option of changing things if the need arises. Just look at the amount of tinkering that goes on with the rest of the tax code.
So why so much support for yet another temporary suspension? It’s certainly not because Internet taxation is popular — there just aren’t a lot of voters out there demanding more fees on their DSL lines.
Strangely, the problem may be the opposite: The idea of taxing the Internet is unpopular, and members get a boost from voting to ban it. And temporary extensions let them vote to ban it again and again and again. A permanent ban would stop the fun.
Steven Pearlstein, a business columnist for The Washington Post, has an interesting editorial up today wondering whether Google is the next AT&T, IBM, Intel or Microsoft in the sense that, like those companies, Google might be headed for increased antitrust or regulatory scrutiny based on its marketplace success:
With its proposed purchase of DoubleClick, Google has followed suit in drawing the scrutiny of the competition police, both at home and in Europe. The reason is simple: Like its predecessors, Google shows every sign of pulling away from the pack in a market that naturally tends toward a single, dominant firm.
Pearlstein goes on to explain how Google’s business model works in layman’s terms and then points out why there is little to fear from Google’s proposed acquisition of DoubleClick:
I mean, some kind of feline, obviously, but what kind? Some kind of wildcat? Something crossed with a domestic? T’ain’t no pixie-bob.
http://www.diesel.pp.net.ua/news/2007-02-15-104
I felt like I was reading a story from the future when I read this lead from a news article about a Microsoft executive making the case that desktop software is still relevant:
A top Microsoft executive defended desktop application software, the source of the company’s revenue for three decades, arguing on Tuesday that even services-based companies such as Google still need it.
But then I just realized that I’m old, and time and the competitive software marketplace have moved quickly the past few years.
Nevertheless, I’m so intrigued by all the new business models vying for both business customers and consumers like you and me that I’m currently writing a paper on them. My public policy bent is nuanced but relevant: do new business models (not a single technology, but a particular way of doing business, like licensing, services, or ad-based models) need a regulatory helping hand to compete? I’m talking about interoperability mandates, spectrum auction rules, standards…you get the drift. Of course, I’m going to have to say that even if you can think of a reason for antitrust regulation, FCC intervention, etc., there are countervailing reasons against government regulation that are likely more compelling. Back to paper writing….
Threat Level offers some safety tips for laptop users accessing public hotspots:
“The most dangerous places to connect are airports, hotels, convention centers,” says Richard Rushing, chief security officer for AirDefense, a wireless security firm. “And most people use credit cards there.”
Oops. I am hooking up to the San Diego Convention Center’s wireless and paying for it with a credit card as he says this. Apparently lots of other people are too, because a snicker rings through the workshop here at ToorCon9.
By their nature, WiFi hotspots are insecure, he says, though they can be made more secure by using client isolation, which makes it harder for one user to move along the communications links between the server, other clients, and the web.
“Client isolation should be turned on but we can still spoof the address or take the address backwards,” he says, noting that Macs are easily spoofed.
“Hot spots are really set up for the bad guys,” he says.
When Rushing looked at hotspot users, he found 30 percent have no firewalls and 3 percent have active malware they’re inadvertently introducing to the servers.
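The client isolation Rushing mentions is essentially a forwarding rule on the access point: traffic between a client and the upstream gateway passes, but traffic between two wireless clients is dropped. Here is a minimal toy sketch of that rule; the class and names are hypothetical illustrations, not any real access point’s firmware or API:

```python
# Toy model of Wi-Fi client isolation: the access point forwards frames
# between clients and the uplink, but drops client-to-client traffic so
# one hotspot user cannot probe another directly. All names here are
# hypothetical illustrations, not a real AP interface.

UPLINK = "gateway"

class AccessPoint:
    def __init__(self, client_isolation=True):
        self.client_isolation = client_isolation
        self.clients = set()

    def associate(self, mac):
        self.clients.add(mac)

    def should_forward(self, src, dst):
        # Traffic to or from the uplink is always allowed.
        if src == UPLINK or dst == UPLINK:
            return True
        # With isolation on, two associated clients may not talk directly.
        if self.client_isolation and src in self.clients and dst in self.clients:
            return False
        return True

ap = AccessPoint(client_isolation=True)
ap.associate("laptop-a")
ap.associate("laptop-b")

print(ap.should_forward("laptop-a", UPLINK))      # True: normal browsing works
print(ap.should_forward("laptop-a", "laptop-b"))  # False: peer traffic blocked
```

This also illustrates Rushing’s caveat: the filter keys on addresses, so an attacker who spoofs the gateway’s address slips past a rule like this.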
This is probably an issue I should have mentioned in my Times piece. It’s true that the risks of sharing your wireless connection are not zero: it does make it possible for other users on the network to scan your machine for vulnerabilities. However, the tips about public hotspots help to put that risk in perspective; your laptop is far more likely to encounter someone malicious in an airport or coffee shop, which is teeming with strangers, than in your home. So if you’re worried about the security risks of sharing your home wireless connection, you should be a lot more reticent about using public access points. The nature of the security risks involved is identical, and the number of potential adversaries is much higher on a public hotspot.
If you’ve never experienced the World Wide Web, you need to read Daniel Solove’s The Future of Reputation: Gossip, Rumor, and Privacy on the Internet. But if you have used the Web, you’ll wonder about passages like this, rudiments that routinely crop up in the book:
When . . . bloggers find a post interesting, they will link to it. A “link” is a hyperlink, text that whisks you at a click to another webpage. The Web is interlaced with links, a giant latticework of connections between websites, where Internet traffic fires like synapses in a gigantic brain.
But forgiving these curiosities, the reader joins Solove on a whirl through some interesting problems created by the new medium of the Internet. Chiefly, personal information is persistent and amenable to copying. This means that slights and slanders can be magnified. Fairly or unfairly, the Internet can break people’s reputations.
I have a new paper out this week entitled “Unplugging Plug-and-Play Regulation” in which I discuss the ongoing dispute between cable operators and the consumer electronics industry over “digital cable ready” equipment and “plug-and-play” interactive applications. Basically, it’s a fight about how various features or services available on cable systems should work, including electronic programming guides (EPGs), video-on-demand (VOD), pay-per-view (PPV) services, and other interactive television (ITV) capabilities.
This fight is now before the Federal Communications Commission where the Consumer Electronics Association (CEA) has asked the agency to mandate certain standards for those next-generation interactive video services. In my paper, I argue that regulation is unwise:
Ongoing marketplace experimentation and private negotiations represent the better way to establish technical standards. There is no need for the government to involve itself in a private standard-setting dispute between sophisticated, capable industries like consumer electronics and cable. And increased platform competition, not more government regulation of cable platforms, is the better way to ensure that innovation flourishes and consumers gain access to exciting new services.
To read the entire 7-page paper, click here.
Ed Felten isn’t impressed with Comcast’s traffic shaping techniques:
Comcast is using an unusual and nonstandard form of blocking. There are well-established mechanisms for dealing with traffic congestion on the Internet. Networks are supposed to respond to congestion by dropping packets; endpoint computers notice that their packets are being dropped and respond by slowing their transmissions, thus relieving the congestion. The idea sounds simple, but getting the details right, so that the endpoints slow down just enough but not too much, and the network responds quickly to changes in traffic level but doesn’t overreact, required some very clever, subtle engineering.
What Comcast is doing instead is to cut off connections by sending forged TCP Reset packets to the endpoints. Reset packets are supposed to be used by one endpoint to tell the other endpoint that an unexplained, unrecoverable error has occurred and therefore communication cannot continue. Comcast’s equipment (apparently made by a company called Sandvine) seems to send both endpoints a Reset packet, purporting to come from the other endpoint, which causes both endpoints to break the connection. Doing this is a violation of the TCP protocol, which has at least two ill effects: it bypasses TCP’s well-engineered mechanisms for handling congestion, and it erodes the usefulness of Reset packets as true indicators of error.
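For the curious, the Reset that Felten describes is just one flag bit in the TCP header. This short sketch builds a minimal 20-byte TCP header with the RST bit set and reads it back, using only the standard library; the ports are made-up examples:

```python
import struct

# The RST ("reset") flag is bit 2 of the TCP flags field (RFC 793).
FLAG_RST = 0x04

def build_tcp_header(src_port, dst_port, flags):
    # Minimal 20-byte TCP header: ports, sequence, ack, data offset +
    # flags, window, checksum, urgent pointer. Checksum is left at zero
    # because this packet is never sent on the wire.
    offset_flags = (5 << 12) | flags   # data offset = 5 words, no options
    return struct.pack("!HHIIHHHH", src_port, dst_port, 0, 0,
                       offset_flags, 0, 0, 0)

def is_reset(header):
    # The flags share bytes 12-13 with the data offset field.
    offset_flags = struct.unpack("!H", header[12:14])[0]
    return bool(offset_flags & FLAG_RST)

pkt = build_tcp_header(6881, 50000, FLAG_RST)
print(is_reset(pkt))                               # True
print(is_reset(build_tcp_header(80, 50000, 0)))    # False
```

Note that a forged Reset is not malformed in any way; the violation lies in faking the source, which the receiving endpoint cannot distinguish from a genuine error report by its peer. That is why it erodes the usefulness of Resets as true indicators of error.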
This brings to mind a question: as I understand it, TCP relies to some extent on clients being well-behaved and voluntarily backing off when faced with congestion problems. Is it possible that part of the reason that Comcast chose to target P2P applications specifically is that these aren’t “well-behaved” applications in this sense? Richard seems to be implying that this is the case. Is he right?
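The “well-behaved” back-off in question is TCP’s additive-increase/multiplicative-decrease (AIMD) rule: grow the congestion window by roughly one segment per round trip, and halve it when a drop signals congestion. A toy simulation of a single sender (the link capacity is an arbitrary number chosen for illustration, not a real network parameter):

```python
# Toy AIMD (additive-increase/multiplicative-decrease) loop, the core of
# standard TCP congestion control. Each round the sender grows its
# congestion window by one segment; when the window exceeds the link's
# capacity and packets drop, it halves the window.

def aimd(rounds, capacity=20):
    cwnd = 1.0
    history = []
    for _ in range(rounds):
        if cwnd > capacity:      # congested link drops packets...
            cwnd = cwnd / 2      # ...so the endpoint backs off sharply
        else:
            cwnd += 1.0          # otherwise probe for more bandwidth
        history.append(cwnd)
    return history

# The window climbs past capacity, halves, and climbs again: the familiar
# TCP "sawtooth" that keeps the link busy without collapsing it.
print(aimd(25))
```

An application that opens many simultaneous TCP connections, as some P2P clients do, still backs off on each connection but grabs a larger share of the link overall, which is one sense in which such traffic is said to be less well-behaved.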
While Comcast scrambles to explain itself, and those better versed in the technical issues debate the merits (see the comments) of what they surmise Comcast to be doing, I think it’s important to focus on another angle.
Look at the press and consumer reaction to the allegation that Comcast defied the public’s expectations. For example, Rob Pegoraro of the Washington Post has announced that he is investigating the issue for his column on Thursday, and has asked the public to help inform his thinking.
A mass of Comcast customers are weighing in, fairly or unfairly heaping a wide array of Internet woes on this ISP. And here’s a key quote from one commenter: “I got rid of comcast the second that Verizon FIOS was available in my neighborhood . . . .”
Are consumers helpless against the predation, real or imagined, of this ISP? No they are not. The market forces playing out before us right now are bringing Comcast sharply to heel – and other ISPs too: they are watching with keen interest – never mind whether Comcast has done anything wrong from a technical or “neutrality” standpoint.
The challenge again is for proponents of broadband regulation to show how law, regulation, and a regulatory agency could do a better job than the collective brainpower and energy of the Internet community.