FCC Killing BitTorrent? Not Exactly

by Tim Lee on August 9, 2008

I’m always interested in stories about the unintended consequences of government regulation, but this story from Valleywag (via a comment from Richard Bennett) doesn’t make a lot of sense:

The prospect of pay-by-the-bit bandwidth had immediate consequences for BitTorrent’s two main businesses: an online-media store delivered via file sharing, and a content-delivery network which competed with the likes of Akamai and Limelight Networks.

For users who would have to pay bandwidth fees to their ISPs on top of paying the usual charges, BitTorrent’s Torrent Entertainment Network store would soon look uncompetitive with the likes of Apple’s iTunes Store and Microsoft’s Xbox Marketplace — which prompted Best Buy to back out of talks to acquire TEN for $15 million.

As for BitTorrent’s content-delivery network, it was premised on the notion that BitTorrent would negotiate with ISPs to get privileged delivery for their file-sharing packets, while Comcast blocked others. With the FCC forcing Comcast to treat all file-sharing traffic equally, the promise of that business evaporated.

The obvious problem with this is that Apple, Microsoft, Akamai, et al. haven’t negotiated privileged bandwidth agreements with ISPs either. If users have to pay their ISPs extra to download a 10 GB HD movie from BitTorrent, they’re going to have to do the same to download HD movies from iTunes or the Xbox store. BitTorrent’s big advantage is that it faces dramatically lower bandwidth costs on the other side of the pipe, because its users share files with each other rather than everyone pulling the file from a central server. If bandwidth caps and metering doom BitTorrent, they doom iTunes and the Xbox store too. Somehow, I don’t think we’re about to see the end of video download services.
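
To put rough numbers on that cost asymmetry, here is a minimal back-of-the-envelope sketch in Python. Every figure in it apart from the 10 GB file size is an illustrative assumption, not an actual iTunes or BitTorrent cost; it only shows how seeding shifts bytes off the distributor’s origin servers.

```python
# Illustrative comparison of origin-server egress for a direct-download
# store vs. a peer-assisted (BitTorrent-style) store. Every number here
# is a made-up assumption for the sake of the arithmetic, not a real
# iTunes or BitTorrent figure.

FILE_GB = 10          # one HD movie, as in the post
CUSTOMERS = 1000      # downloads of that movie
COST_PER_GB = 0.10    # assumed wholesale transit price, $/GB
PEER_SHARE = 0.90     # assumed fraction of bytes peers supply to one another

def direct_egress_gb() -> float:
    # Every byte of every copy comes from the store's own servers.
    return FILE_GB * CUSTOMERS

def peer_assisted_egress_gb() -> float:
    # The store's seeds only supply the fraction peers don't cover.
    return FILE_GB * CUSTOMERS * (1 - PEER_SHARE)

if __name__ == "__main__":
    for name, gb in [("direct download", direct_egress_gb()),
                     ("peer-assisted", peer_assisted_egress_gb())]:
        print(f"{name:>15}: {gb:7.0f} GB from origin, ~${gb * COST_PER_GB:,.2f} in transit")
```

In both cases the customer still pulls the same 10 GB across the last mile, which is why a metered cap hits an iTunes download just as hard as a torrent.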

  • Ryan Radia

    Unless Valleywag knows something we don’t, that story makes no sense. Though Limelight probably would attempt to negotiate privileged delivery if a major ISP were to implement metered pricing. I wonder if that would incite the neutrality brigade?

I wouldn’t rule out the possibility of metered pricing hurting online video downloading. See Bell Canada’s caps, Comcast’s proposed $10/15GB overage fee, and Time Warner’s trial in Beaumont. None of these regimes affect users who only download a handful of high-def movies each month, but for a family that watches 30 or 40 movies a month, the prospect of a $5 per movie bandwidth charge would really sting. I’d like to see gigabytes priced at a more reasonable level, say $0.25 per GB, or something more closely tied to the actual transit cost.

  • http://blog.xflames.com Peter

The difference with peer-to-peer is that if you pay for both upstream and downstream traffic, you either don’t share or you pay twice. In the first case (you don’t share), BitTorrent has no advantage over any other service: everyone downloads from them. In the second case, consumers will think twice about using a service that doubles (or more) the bandwidth they use.

    I’m not saying it would kill BitTorrent, or that the other points the original post was making are correct, just that there is a reason that bandwidth charges could impact peer-to-peer delivery systems more than other delivery methods.
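
As a rough illustration of Peter’s “pay twice” point, here is a toy calculation assuming a flat metered rate on both upstream and downstream bytes; the $0.25/GB price is Ryan’s hypothetical figure above, not any ISP’s actual tariff.

```python
# Toy model of what a metered subscriber is billed for one peer-to-peer
# download, assuming the ISP meters upstream and downstream bytes at the
# same rate. The $0.25/GB price is a hypothetical, not a real tariff.

FILE_GB = 10
PRICE_PER_GB = 0.25

def billed_cost(seed_ratio: float) -> float:
    """seed_ratio = GB uploaded per GB downloaded (0 = pure leech, 1 = share 1:1)."""
    downloaded = FILE_GB
    uploaded = FILE_GB * seed_ratio
    return (downloaded + uploaded) * PRICE_PER_GB

if __name__ == "__main__":
    for ratio in (0.0, 0.5, 1.0):
        print(f"seed ratio {ratio:.1f}: ${billed_cost(ratio):.2f} for a {FILE_GB} GB movie")
    # A ratio of 0.0 costs the same as a direct download; a ratio of 1.0
    # costs twice as much, which is exactly the incentive Peter describes.
```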

  • http://www.thestalwart.com Joseph Weisenthal

    This post is a pretty solid takedown of the ValleyWag article, which is mainly an attempt to add a sensational narrative to the core story — BitTorrent’s layoffs.

That being said: Like a lot of folks, I really don’t get the idea behind BitTorrent having its own retail media operation. If, as you say, they thought they had an advantage over Apple et al. courtesy of lower bandwidth costs (on their end), then they’re sorely mistaken. Users, obviously, don’t give a pig’s lick about their provider’s bandwidth costs. And so having a lower cost basis per GB is hardly a competitive advantage.

As for the layoffs, part of the problem facing their CDN-like business is their name: BitTorrent. Their natural customers — major media companies delving into online media — associate BitTorrent with a system of piracy (fairly or unfairly). They have to spend energy getting their foot in the door — explaining that it’s not the same thing — when other competitors in the p2p distro space don’t have to.

  • http://bennett.com/blog Richard Bennett

Peter puts his finger on your fundamental misunderstanding, Tim: P2P facilitates downloading by appropriating scarce and unmetered upstream bandwidth. If ISPs can’t control use of the upstream path (from the user to the Internet) by prioritizing, traffic shaping, and policing, they will have no choice but to meter it. A metered retail service will certainly be more costly than the metered wholesale services purchased by iTunes, Amazon, and Google/YouTube.

    P2P as a commercial service hasn’t fared well, and nothing in the FCC’s ruling gives it any comfort. So rather than protecting what the Commissioners called an “innovate new application” they hastened its demise as a commercial service.

    P2P is primarily useful for piracy, because any commercial content service collecting fees can cover the costs of feeding their downloaders.

    So once again, the fundamental distinction you need to make is between upstream traffic and downstream traffic, where “up” and “down” are from the consumer perspective. Broadband networks are engineered for downloading, not for uploading.

  • http://bennett.com/blog Richard Bennett

    I misspelled “innovative” in my comment.

    But I spelled “up” and “down” correctly, and that’s the point.

  • http://blaynesucks.com Aaron Massey

I have to disagree with Richard, somewhat. The fundamental distinction is not simply the difference between upstream and downstream. The fundamental difference is connectivity “in the last mile” versus “in the cloud.”

The last leg of a connection from an ISP to a residential home user is usually low bandwidth. P2P traffic eats this up because, as Peter noted earlier, the traffic doubles *in the last mile* since end users are both uploading and downloading. However, the total number of bits moving on the network is the same.

Let’s take an (overly simple) example. If five people download a 100 MB file using traditional services from some server outside of their local network, then the ISP has 500 MB of data travel from that server through their local branch and then down to each of the end users. In other words, 500 MB travels both “in the cloud” and in “the last mile” for a total of 1000 MB traveling on the network.

    However, if the same five people download the same 100 MB file using BitTorrent, then it could easily be the case that the first one gets it entirely through the cloud (100 MB in the cloud) and the rest get it by downloading it through last mile connections (900 MB in the last mile). Since the last mile is usually a much lower capacity, this can put a real strain on ISPs.

Of course, protocols like BitTorrent have serious advantages too. Think about the situation where the server is on the other side of the world. Now that 100 MB file could be traveling across many, many hops. Let’s say it takes 4 hops “in the cloud” before getting to the end user. In the first case that’s 2000 MB in the cloud and 500 MB in the last mile. In the second (BitTorrent) case, that’s only 400 MB in the cloud and 900 MB in the last mile. Total traffic in the first case is 2500 MB versus 1300 MB in the second (BitTorrent) case. (The arithmetic is sketched in code after this comment.)

    From a single ISP’s standpoint though the first case is preferred. They don’t care about the network efficiency for other folks in the cloud. They only care that their networks are far more crowded.

    Richard is absolutely right that ISPs are engineered for downloading. This is why the last mile bandwidth is thin. Most of the time there isn’t a lot of traffic there as compared to “in the cloud.”

    However, P2P is simply more efficient from a network standpoint. Eventually, ISPs that adjust their networks to better support last mile connectivity will be able to produce a better product for their customers. (It’s a lot faster to download something from your neighbor using a P2P client than from some server halfway around the world.)
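
Aaron’s arithmetic can be restated in a few lines of Python. This is only a re-statement of the toy example above (five users, a 100 MB file, an assumed four hops “in the cloud”), not a model of any real network.

```python
# Re-statement of the toy example above: five users fetch a 100 MB file,
# either from a distant server (4 hops "in the cloud") or via BitTorrent,
# where only the first copy crosses the cloud and the rest move between
# local peers over their last-mile links.

USERS = 5
FILE_MB = 100
CLOUD_HOPS = 4  # assumed hops between the distant server and the ISP

def client_server():
    cloud = USERS * FILE_MB * CLOUD_HOPS   # every copy crosses every hop
    last_mile = USERS * FILE_MB            # one download per user
    return cloud, last_mile

def bittorrent():
    cloud = FILE_MB * CLOUD_HOPS           # only the first copy crosses the cloud
    # Last mile: the first user's download, plus each remaining copy
    # crossing a peer's upstream link and the recipient's downstream link.
    last_mile = FILE_MB + (USERS - 1) * FILE_MB * 2
    return cloud, last_mile

if __name__ == "__main__":
    for name, (cloud, last) in [("client-server", client_server()),
                                ("bittorrent", bittorrent())]:
        print(f"{name:>13}: {cloud} MB in the cloud, {last} MB in the last mile,"
              f" {cloud + last} MB total")
```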

  • Tim Lee

    Richard is absolutely right that ISPs are engineered for downloading. This is why the last mile bandwidth is thin.

    I don’t think it’s even possible to make this blanket statement. FiOS, for example, gives customers a 10-50 mbit dedicated, symmetrical last mile connection. If a Verizon customer wants a 10 GB HD movie file, I don’t think it’s at all obvious that Verizon would prefer that the whole 10 GB come via long-haul pipes on the other side of the world instead of in 10 equally-sized pieces from other local Verizon customers. Depending on the network topology and which parts of the network happen to be congested at that moment, either option could place less strain on Verizon’s network.

    The story’s obviously different for a cable network that’s engineered for downloads at the expense of uploads, but that’s a design choice on the part of the cable companies, not an inherent technological limitation. They could—and if peer-to-peer continues to grow, probably will—re-design their network to better support peer-to-peer traffic.

  • http://bennett.com/blog Richard Bennett

    DSL and FiOS are also asymmetric on the first, second, and third hops. Verizon offers a 20/20 service for file sharers, but the vast majority of FiOS accounts are 15/5 and the network is engineered internally to support more downloads than uploads.

Aaron makes one big mistake in his analysis of P2P efficiency and performance. In the unlikely event that a P2P downloader can find all his content inside his ISP’s network, it’s not typically going to be faster to download it locally than it would be remotely. That’s because the local upload pipes are much more constrained than the upload pipes in the cloud, especially those from CDNs. CDNs are also asymmetric, since they need to support more upload (from their point of view) than download.

    The reason BitTorrent often fetches content from far away is that it selects strictly on the basis of bandwidth, not distance.

    It’s also a bit misleading to call BT more efficient because it might require fewer hops if the hops it uses are more congested.
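
To illustrate the selection behavior Richard describes, here is a toy comparison of two peer-selection rules in Python. This is not BitTorrent’s actual choking algorithm; the candidate peers and the per-hop penalty are invented for the example. A rate-only rule happily picks a fast peer many hops away, while a locality-biased rule prefers the nearby one.

```python
# Toy comparison of two peer-selection rules. This is not BitTorrent's
# actual choking algorithm; the candidate peers and the per-hop penalty
# are invented purely for illustration.

from dataclasses import dataclass

@dataclass
class Peer:
    name: str
    rate_kbps: int   # observed transfer rate from this peer
    hops: int        # rough network distance to this peer

CANDIDATES = [
    Peer("same-ISP neighbor", rate_kbps=400, hops=1),
    Peer("cross-country seed", rate_kbps=900, hops=8),
    Peer("overseas seed", rate_kbps=1200, hops=15),
]

def pick_by_rate(peers):
    # Rate-only rule: grab the fastest peer, wherever it is.
    return max(peers, key=lambda p: p.rate_kbps)

def pick_locality_biased(peers, penalty_per_hop=0.1):
    # Discount each peer's rate by an assumed per-hop penalty.
    return max(peers, key=lambda p: p.rate_kbps * max(0.0, 1 - penalty_per_hop * p.hops))

if __name__ == "__main__":
    print("rate only     ->", pick_by_rate(CANDIDATES).name)          # overseas seed
    print("locality bias ->", pick_locality_biased(CANDIDATES).name)  # same-ISP neighbor
```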

  • http://blaynesucks.com Aaron Massey

    I don’t think it’s even possible to make this blanket statement.

    Well, it’s certainly possible to make a generalization, but is it correct? :-P I should have prefaced it with “the vast majority” since I’m sure there’s an exception, though I sincerely doubt that Verizon is one of them.

    At the most basic level ISPs tend towards enabling downloads simply because that’s how most people use the Internet. Home Internet connections spend the vast majority of their time idle. Of the time they are active, the vast majority of that is downloading something. ISPs have engineered their systems to respond to this.

Take a step back from the asymmetric vs. symmetric arguments (although Richard is right about this). If you were an ISP, why would you build out the incredibly expensive infrastructure in a way other than what matches the norm of real-world use? I would be stunned if Verizon didn’t have the same kind of traffic congestion that Comcast did if all its customers decided to download files from local peers on the Verizon network rather than from the Internet at large.

    It’s also a bit misleading to call BT more efficient because it might require fewer hops if the hops it uses are more congested.

    Again, I have to disagree with you, somewhat. BT is simply a more efficient protocol. Real world use of BT is crippled by the current architecture of most ISPs. Again, this is why Comcast was being overrun by BT traffic. As a result it may sometimes be more efficient to use BT and sometimes less so.

    Think about it this way. When Lake Pontchartrain only had a single two-lane causeway crossing it, there was a serious argument to be made at certain times of the day that it would be more efficient to simply drive around the lake rather than over it. Does that mean that the concept of using a bridge is less efficient? No, in theory, a bridge is more efficient than driving around an obstacle. However, the real world situation may have differed simply because of the current state of the infrastructure.

Eventually, the last mile connectivity problems will be resolved (probably with some wireless solution similar to WiMax, only better — there is a reason why Comcast and Time Warner are both investors in Clearwire). At that point, the theoretical efficiency gains to be had with protocols like BT will be even easier to achieve in the real world.

  • http://bennett.com/blog Richard Bennett

Regarding Aaron’s claim that “BT is simply a more efficient protocol,” I’d have to say that protocol efficiency doesn’t exist in a vacuum; it’s a function of the underlying infrastructure. P2P would be fine for an infrastructure in which all links were uniform capacity and fully symmetrical, but that’s not the world we live in. Our networks are tuned for short bursts of upload traffic – requests and commands – and longer bursts of download traffic. So a truly efficient protocol for this infrastructure would simply find the least congested download links and least loaded servers (a toy version of that rule is sketched after this comment). You can build that by installing lots of caches.

    But infrastructure is a designed artifact as well, so there are all sorts of interesting questions about just how it should be provisioned to support diverse uses.
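
A minimal sketch of the selection rule Richard describes (pick the least loaded server over the least congested link); the cache names and the load figures are hypothetical, and a real CDN request router is far more involved.

```python
# Minimal sketch of "find the least loaded server on the least congested
# link": score each replica and pick the best. The cache names and the
# load/congestion figures are hypothetical.

CACHES = {
    # name: (server load 0..1, path congestion 0..1)
    "metro-cache-1": (0.30, 0.10),
    "metro-cache-2": (0.85, 0.05),
    "regional-origin": (0.20, 0.60),
}

def pick_cache(caches):
    # Lower combined score means a less loaded server on a less congested path.
    return min(caches, key=lambda name: sum(caches[name]))

if __name__ == "__main__":
    print("fetch from:", pick_cache(CACHES))  # -> metro-cache-1
```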

  • http://blaynesucks.com Aaron Massey

    P2P would be fine for an infrastructure in which all links were uniform capacity and fully symmetrical, but that’s not the world we live in.

Yet. The market evolved to build networks that are efficient and practical given real world conditions. These conditions are changing. Newer technologies such as WiMax are making it much cheaper to improve bandwidth in the last mile. In these new conditions, the market will still evolve to build networks that are efficient and practical. The only difference is that better protocols such as BT will win.

    Richard, I get the feeling that we’re viewing the network in fundamentally different ways. You have taken the very practical “what can we do right now with the current network infrastructure” view. Everything you have said has been 100% dead-on accurate in terms of the present network infrastructure. I have taken the somewhat impractical “what could we do in the future” view. I’m an academic so I get to do that sort of thing. :-P

They are certainly both valid views, but the latter is really the reason why we don’t want the FCC regulating the Internet. I think you’re nodding in this direction with the last sentence of your last comment. No one knows whether or how quickly new technologies will alter the landscape and upset current accepted wisdom about network infrastructure. As Tim’s original post points out, there are always unintended consequences to regulation.

  • http://bennett.com/blog Richard Bennett

    If you want to talk about efficiency in the delivery of popular content, Aaron, the answer isn’t P2P, it’s multicast.

P2P is primarily of interest only because it shifts costs from content distributors to ISPs. That’s economic efficiency, not network efficiency. Your model of P2P makes very charitable assumptions about content location. In the general case, the most efficient place to find content is in the network’s core, not at an edge. Do the math (a toy version of it is sketched after this comment) and you’ll see that the worst possible location is a faraway edge, and there are more of those than there are cores.

    My problem with the FCC regulating networks is the absence of a model for correct behavior. I’m not opposed to rules, because they can be changed if they’re antithetical to progress. But regulation according to airy principles in the hands of unqualified people is a recipe for disaster.
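
Richard’s “do the math” remark can be illustrated with a toy three-level tree topology: one core node, R regional nodes, and E edge hosts per region, all numbers hypothetical. The core reaches any edge host in a fixed two hops, while two randomly chosen edge hosts are usually four hops apart.

```python
# Toy "do the math" for content location: a three-level tree with one core
# node, R regional nodes, and E edge hosts per region (all numbers are
# hypothetical). Compare hops from the core to an edge host with the
# average hops between two randomly chosen edge hosts.

R = 10   # regional nodes
E = 100  # edge hosts per region

def hops_core_to_edge():
    return 2  # core -> region -> edge

def avg_hops_edge_to_edge():
    total_others = R * E - 1
    same_region = E - 1                        # edge -> region -> edge = 2 hops
    other_region = total_others - same_region  # detour via the core   = 4 hops
    return (same_region * 2 + other_region * 4) / total_others

if __name__ == "__main__":
    print("core -> edge :", hops_core_to_edge(), "hops")
    print(f"edge -> edge : {avg_hops_edge_to_edge():.2f} hops on average")
```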

  • http://blaynesucks.com Aaron Massey

    I don’t really know much about multicast. My research is in computer security and privacy, but focuses on software engineering rather than networks. I will have to look into it.

I am not saying that BitTorrent is the perfect protocol, but I do think it has some serious advantages. It is economical, particularly for open source projects that don’t have the resources to run a large download server. It is simply better for popular downloads that will be found locally for most edge nodes. I also like the fact that it effectively is its own cache mechanism. Thus, interrupted downloads can be easily continued through a mechanism built into the protocol.

    I’m sure improvements could be made to BitTorrent. Perhaps something that could adjust based on the number of local peers serving the file would be better. My only real interest in the technical debate was to correct a somewhat minor point that the fundamental distinction in P2P networks isn’t downloading versus uploading; it’s “the last mile” versus “in the cloud.”

    In response to your last paragraph, I am opposed to rules because they must be changed if they’re antithetical to progress. Progress is really hard to identify. There’s something that Paul Graham says about startup companies that fits pretty well here:

    “A good startup idea has to be not just good but novel. And to be both good and novel, an idea probably has to seem bad to most people, or someone would already be doing it and it wouldn’t be novel.”

    How can we possibly expect any regulator, even network experts such as yourself, to recognize the good ideas and account for them in their regulatory efforts? Regulation should always be an absolute last resort. This applies not just to regulation based on the airy principles of unqualified people, but also to well-intentioned regulation created by universally recognized experts.

  • crack

    Regardless of the appropriateness or efficiency of P2P, the key point is still the one made by Peter. Pay per bit schemes would drastically change the incentive to share in a P2P system.

  • Brett Glass

    Claims that P2P is “efficient” are complete nonsense. Nothing is more efficient than a simple, direct file transfer from source to destination. Finding, contacting, handshaking with, and downloading bits and pieces of a file from hundreds of machines throughout the network is hundreds of times less efficient. What’s more, because machines at the “edge” of the network are being used to send the data, the MAXIMUM possible amount of resources is being consumed. Claims by BitTorrent that its software is “efficient” are, quite simply, lies. Intentional lies used to market its product.

    P2P has two purposes. One — the original purpose — is to make it difficult to stop the transfer of pirated intellectual property. The second — which is more recent — is to shift the cost of server bandwidth from the provider(s) of content to ISPs. In some cases, the latter purpose serves commercial entities which base their business models on reduced bandwidth costs. In others, the content providers are non-commercial entities that want a free ride (e.g. authors of freely distributed software). But even if they are non-commercial, this does not entitle them to take ISPs’ bandwidth without permission or compensation. Nonprofits have to pay the electric and gas bills, too; they can’t just take these products from the utilities.

    In any event, BitTorrent is suffering layoffs because the FCC decision did not go far enough to sustain its business model. The FCC decision prevented ISPs from singling BitTorrent (as opposed to, say, Limewire) out for special treatment, but still allowed them to charge users extra if they hogged bandwidth as a result of using BitTorrent’s inefficient software. In short, ISPs can still thwart BitTorrent’s attempts to dump costs on them. And since BitTorrent, Inc.’s business model relies on taking bandwidth from ISPs without compensation, this killed the Best Buy deal.

  • http://blaynesucks.com Aaron Massey

    Howdy Brett!

Your statements, such as the following, indicate to me that you are misunderstanding, or are simply unaware of, several basic networking concepts:

    Claims that P2P is “efficient” are complete nonsense. Nothing is more efficient than a simple, direct file transfer from source to destination. [...] Claims by BitTorrent that its software is “efficient” are, quite simply, lies. Intentional lies used to market its product.

I cannot explain the entirety of computer networking to you in this comment, but I will give you some references to peer-reviewed academic work analyzing BitTorrent-like protocols, from institutions unaffiliated with BitTorrent.

    D. Qiu and R. Srikant. Modeling and Performance Analysis of BitTorrent-like Peer-to-Peer Networks. In SIGCOMM ’04: Proceedings of the 2004 conference on Applications, technologies, architectures, and protocols for computer communications, pages 367–378, New York, NY, USA, 2004. ACM.

    Here’s a quote from the introduction of this paper:

The performance of traditional file sharing applications deteriorates rapidly as the number of clients increases, while in a well-designed P2P file sharing system, more peers generally means better performance.

    D. Arthur and R. Panigrahy. Analyzing BitTorrent and Related Peer-to-Peer Networks. In SODA ’06: Proceedings of the seventeenth annual ACM-SIAM symposium on Discrete algorithm, pages 961–969, New York, NY, USA, 2006. ACM.

    Here’s a quote from the introduction of this paper:

Each file is distributed on its own network as a number of independent data blocks. A client can share individual data blocks it has fully downloaded even if it has not finished downloading the entire file. This allows for a parallelism that is impossible if entire files are treated as atomic blocks.

L. Guo, S. Chen, Z. Xiao, E. Tan, X. Ding, and X. Zhang. A Performance Study of BitTorrent-like Peer-to-Peer Systems. IEEE Journal on Selected Areas in Communications, 25(1):155–169, Jan. 2007.

    Here’s a quote from the introduction of this paper:

…the direct “tit-for-tat” mechanism of BitTorrent is simple, effective, and robust. In practice, BitTorrent systems scale fairly well during flash crowd periods and have been widely used for various purposes, such as for distributing large software packages…

    You can probably find these at any university library if you aren’t able to access them online. You will also find that each of these lists specific benefits of BitTorrent while also analyzing weaknesses in the protocol and suggesting improvements. This is how scientists approach questions of efficiency. They do not base their decisions on marketing material and opinion.

    Now, you might, as Richard has, debate what network topologies are better or worse for BitTorrent use, but from a protocol standpoint, the general academic consensus is that BitTorrent is efficient.

    You continue:

    P2P has two purposes.

    This is simply impossible. P2P is a tool, which has no inherent purpose; only a rational being can have an explicit purpose assigned to its actions. A hammer is a tool. It was designed to embed nails into wood, but it could also be re-purposed for beating people to death. Unfortunately, this has actually happened, but that doesn’t mean we should try to “ban” hammers in some fashion.

    To address your specific concerns: Yes, BitTorrent makes it easier to transfer intellectual property. Then again, a car makes it easier to get away from a bank robbery. Should we ban cars?

    Yes, BitTorrent reduces costs to content providers and increases costs for ISPs. Then again, email reduces the cost of individual communication while hurting the profitability of entities like Hallmark and the US Post Office. Should we ban email?

    Your last comment may indicate some of the causes of your misunderstandings:

    The FCC decision prevented ISPs from singling BitTorrent (as opposed to, say, Limewire) out for special treatment, but still allowed them to charge users extra if they hogged bandwidth as a result of using BitTorrent’s inefficient software.

    BitTorrent is indeed a company, but it is also the name of a file sharing protocol. In our discussion of networks we have been talking about the protocol and not the company. BitTorrent, Inc. is the company that makes money selling advertising on their Torrent Entertainment Network and by helping other engineers develop BitTorrent clients for their products.

LimeWire is a software client that operates on the Gnutella network. The Gnutella network is a Peer-to-Peer network, but it doesn’t split up files in transit. The Peer-to-Peer part is that it allows you to see which files all your peers are sharing. Once you select a file, you are downloading it directly from that person’s system. Recently, the LimeWire software client implemented support for the BitTorrent protocol, which allows LimeWire clients to use both. Either way, the software client is distinct from LimeWire LLC, which makes money by selling a “PRO” version of their software client.

    I hope that helps clear up some confusion.

  • http://bennett.com/blog Richard Bennett

    Aaron, your comment to Brett is one of the most arrogant and uninformed things I’ve ever read. Brett has been designing and implementing network systems since the 1980s, when he was one of the lead designers of Texas Instruments’ Token Ring chip set, and you can be sure that he’s heard of multicast, unlike some people I can name.

    The papers you cite tout BitTorrent’s scalability, a point that’s not in dispute. But scalability is not the same thing as efficiency, and that was the point that Brett and I addressed.

    Efficiency is not enhanced by moving content stores from the network’s core to the edge, in fact it’s grossly impaired. Efficiency is not enhanced by opening and closing thousands of virtual circuits to download a single file, it’s impaired. And efficiency is not enhanced by aborting thousands of transfer transactions in the course of a single file download. And efficiency is not enhanced by ignoring the nature of the infrastructure.

    I’d suggest you need to do a bit more research.

  • http://www.blaynesucks.com Aaron Massey

    Howdy Richard! (and Brett!)

    First, I apologize if my previous comment has offended either of you. That was not at all my intent.

    Second, let’s take a deep breath! It is obvious that you’re deeply invested in this emotionally, and at this pace I feel like we’re two or three comments away from invoking Godwin’s law. Also, there’s a lot of other stuff on the Internet that’s far more “arrogant and uninformed” than my last post, so let’s try and keep some perspective.

Third, Brett didn’t exactly attach his résumé to his comment. I simply assumed he had a business background since he was talking about marketing, bandwidth cost, and business failures. I think if you re-read his comment you’ll see that this is a reasonable assumption if you can ignore your tacit knowledge of Brett’s background.

    Fourth, and perhaps most importantly, much of this thread is suffering from a miscommunication regarding the word “efficiency.” The real-world use, and indeed the common business and policy understanding (which is what I was assuming Brett meant), of efficiency means something like a measure of “operating properly and timely.” Heck, most of the technically oriented people that I talk to (most of whom are software folks and not networking folks) use this colloquial meaning of efficiency. That is how I intended the word when I first used it in this thread if only because I do not consider myself a network geek. This is sort of a combination of all the different academic measures of a networking protocol including scalability, transfer speed, stability, and many more.

I get the feeling that you’re using it as a descriptor of some overhead-to-content ratio (and please feel free to provide a definition if you have one you prefer). My networking class (6 years ago) described that sort of overhead-to-content measure as overhead for a constant amount of content, because “efficiency” is used in so many other ways that it has lost precision. There is certainly less communications overhead in a direct download than in a peer-to-peer system, so perhaps this is a more direct meaning. The papers I cited in my previous post don’t even seem to agree on what “efficiency” means. For example, one describes efficiency as “the probability that two peers can communicate” and assigns the lowercase eta to represent this, while another uses the same lowercase eta but consistently refers to the value as “effectiveness.” Lastly, they even use the term more generally such that it could simply be interpreted as “better.”

    I am a security and privacy researcher. I certainly have a different understanding of authorization and authentication than many technically oriented people who are not as security conscious. I usually define these words when using their precise meaning in an open policy discussion simply because the common, real-world use of these words mixes both of the more precise academic meanings.

    Richard, I urge you to re-read Brett’s post and indeed our conversation to this point to see how a traditional, colloquial understanding of efficiency changes things. I think you’ll see that my comments were honestly intended to be helpful, but appear tragically inappropriate based on our miscommunication regarding “efficiency.” Again, I apologize for any slight you (or Brett) found in my comments.

    Anyhow, getting back to the discussion, one of the reasons that I even cited any papers was that I wanted to provide some avenues of research. I think these are decent papers describing the overall efficiency / effectiveness / performance / “goodness” of BitTorrent. I also think they talk about a lot more than just scalability, and I would be interested to see why you think that’s their only contribution.

    Also, you suggested that I should do more research, but didn’t provide any concrete recommendations as to where. I am certainly interested in your thoughts on good places to start. In particular, you keep mentioning the current infrastructure as something that is important to keep in mind, but much of what I have read recently suggests that there are several changes in the pipeline that would fundamentally alter the last mile infrastructure. I’d be interested if you had something you felt was a good place to start related to this.

  • http://www.blaynesucks.com Aaron Massey

    Maybe this new commenting system will help to prevent such misunderstandings in the future. :-)
