Internet growth: Fast or faster?

by Bret Swanson on June 17, 2008

Cisco continues to do interesting work estimating the impact of video on Internet traffic. With the release of two new detailed reports, updating last year’s “Exabyte Era” paper, they’ve now created a “Visual Networking Index.” These reports follow my own series of articles and reports on the topic.

Cisco’s Internet traffic growth projections for the next several years continue to be somewhat lower than mine. But since their initial report last August, they have raised their projected compound annual growth rate from 43% to 46%. Cisco thus believes world IP traffic will approach half a zettabyte (or 500 exabytes) by 2012. My own projections yield a compound annual growth rate for U.S. IP traffic of around 58% through 2015. This slightly higher growth rate would produce a U.S. Internet twice as large in 2015 compared to Cisco’s projections. Last winter George Gilder and I estimated that world IP traffic will pass the zettabyte (1,000 exabytes) level in 2012 or 2013.
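The gap between the two growth rates compounds quickly. Here is a quick sketch of the arithmetic; the eight-year horizon (2007 through 2015) is my assumption for the comparison:

```python
# Compare how two compound annual growth rates diverge over the same horizon.
def growth_multiple(cagr: float, years: int) -> float:
    """Total traffic multiple after `years` of compounding at `cagr`."""
    return (1.0 + cagr) ** years

YEARS = 8                                # assumed horizon: 2007 through 2015
cisco = growth_multiple(0.46, YEARS)     # Cisco's 46% CAGR
mine = growth_multiple(0.58, YEARS)      # my 58% CAGR
print(f"46% CAGR -> {cisco:.1f}x traffic over {YEARS} years")
print(f"58% CAGR -> {mine:.1f}x traffic over {YEARS} years")
print(f"ratio: {mine / cisco:.2f}x")     # roughly 2x, an Internet twice as large
```

A 12-point difference in annual growth rate, compounded for eight years, is what turns into the factor-of-two gap between the projections.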

For just one example of the new applications that will drive IP traffic growth, look at yesterday’s announcement by Advanced Micro Devices (AMD). Partnering with my friend, the young graphics pioneer Jules Urbach, AMD previewed its Cinema 2.0 project, which combines the best cutting-edge technology and thinking from video games, movies, graphics processors, and computer-generated imagery — with lots of artistic insight and inspiration — to create new kinds of interactive, real-life, real-time 3D virtual worlds, all powered not by supercomputers but by the simple video cards found in PCs and Macs, or by servers in the “cloud.”

A photorealistic 3D robot and city scene rendered in real time. (AMD; Business Wire)

The huge increases in bandwidth and robust traffic management needed to deliver these new high-end real-time services continue to show why net neutrality regulation and other artificial limitations on traffic management are complete non-starters from a technical perspective.

  • Tim Lee

    Bret, could you elaborate on what you mean by “robust traffic management,” and how AMD’s Cinema 2.0 initiative creates a need for it? All this real-time rendering stuff will be done on the client side, so it’s not clear to me that it would require any more bandwidth or lower latency than the current generation of games. How specifically would network neutrality regulation hamper the development of games based on Cinema 2.0?

  • Bret Swanson

    Tim, good question. But that is just one of this paradigm’s many new angles and features. I held the view posed in your question — that only the control data of online games and worlds would need to traverse the Net — until I met Jules Urbach, the inventor of much of the technology used here, in Hollywood a little over a year ago.

    In fact, many of the interactive games and virtual worlds that will use the new systems will be processed and rendered at data centers in the cloud. Intel, AMD, Google, nVidia, and others are experimenting with graphics processors as a new “general purpose” engine not just in game boxes and PCs but in centralized clusters.

    There will be different mixes of centralized and local processing. Some will be all local; some all centralized; some a coordinated combination, depending on factors like latency, the degree of interactivity, and the graphics capabilities of the end-user device. Major gaming companies are already adopting the concept for the next generation of games. (Among other advantages, imagine how a centrally hosted, DVD-less, streamed system will limit piracy of video games in Asia.)

    Of course, hosting real-time 3D virtual worlds and interactive games in the cloud requires huge amounts of bandwidth — and traffic management. One virtual world based on this concept with, say, one million users, could generate Internet traffic of 100 petabytes per month. That’s around 10% of the entire U.S. Internet in 2006.

    Best,

    Bret
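The back-of-envelope behind that 100-petabyte figure is easy to check per user; the 30-day month and decimal byte prefixes below are my assumptions:

```python
# Sanity check: what does 100 PB/month over one million users imply per user?
PB = 1e15                       # bytes, decimal convention
monthly_bytes = 100 * PB
users = 1_000_000
seconds = 30 * 24 * 3600        # assuming a 30-day month

gb_per_user = monthly_bytes / users / 1e9
mbps_per_user = monthly_bytes * 8 / users / seconds / 1e6
print(f"{gb_per_user:.0f} GB per user per month")
print(f"~{mbps_per_user:.2f} Mbit/s sustained per user, around the clock")
```

That works out to 100 GB per user per month — a continuous stream of roughly 0.3 Mbit/s per user, before any peak-hour concentration.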

  • eric

    “…from a technical perspective.” That says it all. Most Americans who’ve thought about net neutrality do not place the technical perspective above other perspectives.

    That said, if my ISP offered a lower price for a “traffic managed” connection, I might be amenable. I find that possibility unlikely in the duopolist broadband environment that we (and many others) have locally.

    We should sacrifice neutrality because some gamester might not be able to play the next cool new 3D fantasy? That’s not a perspective I share, to be sure.

  • http://DSLPRIME.com Dave Burstein

    Bret

    I think the traffic growth projections are sensible, but I don’t see the connection you draw between that and traffic management/policy. (The policy issue turns out not to be network neutrality/traffic management but rather how the service is priced for the user who watches video, incidentally. But that’s another discussion.)

    I’m on the technical side, and the vast majority of technical people think capacity will also grow very rapidly and can handle the traffic inexpensively, even at these high estimates. The bulk of the cost of new capacity is upgrading switches, routers, and similar equipment, all of which is responsive to Moore’s Law; price-performance there has been improving exponentially. Since 2002, the cost per customer of bandwidth has probably been falling even as traffic grew at a rate similar to Cisco’s future projections.

    In addition, bandwidth is such a small part of the cost of broadband that even tripling the cost of network bandwidth wouldn’t have an effect large enough to change policy toward large carriers. Industry figures put bandwidth at only 1-3% of the price charged for broadband.

    I don’t want to get into a net neutrality debate – we’ve all had too many. But I’m asking you to follow up with the technical evidence that we can’t handle this level of traffic in a practical way.

    What’s driving you to that conclusion? I just haven’t seen any hard evidence, and nearly all the technical people think otherwise.

    (First mile/DSL speeds are a different question. Building fiber is expensive, but that’s mostly irrespective of the amount of bandwidth used once cable moves to DOCSIS 3.0)
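Dave’s 1-3% point is simple to illustrate with a hypothetical bill; the $40 monthly price below is my assumption, not a figure from his comment:

```python
# Illustrate: if bandwidth is 1-3% of the retail price, even tripling it
# moves the bill by only a few percent.
PRICE = 40.00   # hypothetical monthly broadband bill in USD (assumed)
for share in (0.01, 0.03):
    cost = PRICE * share
    tripled = 3 * cost
    print(f"share {share:.0%}: bandwidth ${cost:.2f} -> ${tripled:.2f}, "
          f"bill would rise {(tripled - cost) / PRICE:.1%}")
```

At the 1% end, tripling bandwidth cost adds 2% to the bill; at the 3% end, 6% — small either way relative to the retail price.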
