Adam Marcus – Technology Liberation Front
https://techliberation.com
Keeping politicians' hands off the Net & everything else related to technology

Another NFT Explainer (Mon, 29 Mar 2021)
https://techliberation.com/2021/03/29/another-nft-explainer/

I don’t understand the hype surrounding Non-Fungible Tokens (NFTs). As someone who has studied copyright and technology issues for years, maybe that’s because none of it seems very new to me. NFTs are just a remixing of ideas and technologies that have been around for decades. Let me explain.

For at least 100 years, “ownership” of real property has been thought of as a “bundle of rights.” As a simple example, you may “own” the land your house sits on, but the city probably has a right to build and maintain a sidewalk across your yard and the general public has a right to walk across your property on that sidewalk. The gas company has the right to walk into your side yard to read your gas meter. Pilots have a right to fly over your house. Some other company or companies may have rights to any water and minerals in the ground below your house. Your homeowners association may even have a right to dictate what color you paint the exterior of your house.

This same “bundle of rights” concept also applies to copyright. Unless explicitly granted by contract, buying an original painting doesn’t mean you have the right to take a photograph of the painting and sell prints of the photograph. If you buy a DVD, you have the right to watch the DVD privately and the right to sell the DVD when you’re no longer interested in it. (That second right is called the “first sale doctrine,” and numerous Supreme Court cases and laws have defined its exact boundaries.) But unless explicitly granted by contract, purchasing a DVD doesn’t mean you have the right to set up a projector and a big screen and charge members of the public to watch it. That requires a “public performance” right.

When you buy most NFTs, you get very few of the rights that typically come with ownership. You might only get the right to privately exhibit the underlying work. And if you decide to later resell the NFT, the contract (which is embedded in digital code of the NFT) may stipulate that the original artist gets a 10% royalty on every future sale of the work.
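The resale-royalty mechanics can be sketched in a few lines of Python (illustrative only; real NFT royalties are enforced by smart-contract code on a blockchain, and the function name and rate here are hypothetical):

```python
# Toy model of an NFT resale with an embedded artist royalty.
# ROYALTY_RATE mirrors the 10% example above; real contracts set their own rate.
ROYALTY_RATE = 0.10

def settle_resale(sale_price, artist_balance, seller_balance):
    """Split a resale payment: the royalty goes to the original artist,
    the remainder goes to the current seller."""
    royalty = sale_price * ROYALTY_RATE
    return artist_balance + royalty, seller_balance + (sale_price - royalty)

artist, seller = settle_resale(1000.0, 0.0, 0.0)
print(artist, seller)  # 100.0 900.0
```

The key point is that this split is written into the token itself, so the artist collects on every future sale without any further negotiation.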

The second thing you need to understand is the concept of “artificial scarcity.” As a simple example, in the art world it’s common for photographers and painters to sell numbered, “limited edition” prints of their works. There’s no technological reason why they couldn’t print 1,000 copies of a work, or even register it with a “print on demand” service that will keep making and selling prints as long as people want to buy them. But limiting the number of prints made (even if each print is identical to every other) is likely to raise the price. This is artificial scarcity. Most NFTs are an edition of one. Even if there are other exact copies of the underlying artwork sold as NFTs, each NFT is unique. This is like an artist selling numbered prints without putting a limit on how many numbered prints they make. Each numbered print is technically unique because each bears a different number, but without some artificial scarcity, the value of any one print may stay very low.
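The “technically unique, even when the artwork is identical” point can be illustrated with a toy minting function (a Python sketch; real NFT token IDs are assigned by the minting contract, not computed this way):

```python
import hashlib

def mint(artwork_bytes, edition_number):
    # Two tokens for the same artwork differ only by edition number,
    # yet each token ID is distinct -- like identical numbered prints.
    data = artwork_bytes + str(edition_number).encode()
    return hashlib.sha256(data).hexdigest()

art = b"nyan-cat.gif"  # hypothetical artwork file contents
print(mint(art, 1) == mint(art, 2))  # False: same artwork, distinct tokens
```

Nothing stops the artist from calling `mint` a thousand times, which is exactly why the scarcity is artificial rather than technological.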

So if buying an NFT doesn’t get you any real rights and the scarcity is purely artificial, why are NFTs selling for hundreds of thousands of dollars? Here’s where the technology really makes a difference. If you spend millions on a Picasso painting, you’re taking a lot of risks. First, there’s the risk that it’s a forgery, which would drop the value to near zero. Second, there’s the risk that the painting will be stolen from you. Insurance can help with both problems, but it adds complications. If you’re buying the painting as an investment, these complications reduce the “liquidity” of the asset. Liquidity is the ease with which an asset can be converted into cash without affecting its market value. Put more simply, liquidity is how easily the asset can be sold. Cash has long been considered the most liquid asset, but NFTs are arguably even more liquid than cash. NFTs don’t require anything physical to change hands, and even electronic currency transfers take time and are subject to government oversight. NFTs are so new that they’re barely regulated, and by using blockchain technology they can be easily and safely bought and sold anonymously. NFTs are a money launderer’s dream. It’s unclear whether NFTs are actually being used to launder money, but it’s a concern.
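The forgery-risk point can be made concrete with a toy hash-chained transfer log (a minimal Python sketch of the general idea, not how any particular blockchain is implemented):

```python
import hashlib
import json

def add_transfer(chain, new_owner):
    """Append an ownership transfer. Each record commits to the hash of the
    previous record, so forging or back-dating an earlier entry would
    invalidate every later record's link."""
    prev_hash = chain[-1]["hash"] if chain else "genesis"
    record = {"owner": new_owner, "prev": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    chain.append(record)
    return chain

chain = add_transfer([], "artist")
chain = add_transfer(chain, "collector")
# The second record's "prev" field pins down the first; tampering breaks the link.
```

This is why a buyer can verify an NFT’s provenance without trusting an appraiser: the entire ownership history is tamper-evident.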

The other reason I think NFTs are so popular is speculation. Because NFTs are so liquid, and because there basically doesn’t even need to be an underlying work, the initial cost to “mint” (create) an NFT is near zero. And by using blockchain systems, NFTs can be resold with little overhead. (Though they can also be configured to guarantee a certain overhead, e.g., that 10% of every resale goes to the original artist.) These characteristics, along with the newness of NFTs, make them a popular marketplace for speculators: people who purchase assets with the intent of holding them for only a short time and then selling them for a profit.

NFTs started to enter the public consciousness in February 2021, after the 10-year-old “Nyan Cat” animation sold for over half a million dollars. That was also just a few weeks after the GameStop short squeeze made a compelling case that average investors, working in concert, could upset the stock market and make millions. So it’s no wonder that there is rampant speculation in NFTs.

In conclusion, NFTs will be a tremendous benefit to digital artists, who did not previously have a way to easily prove the authenticity of their works (which is of tremendous importance to investors) or to provide a digital equivalent to numbered prints in the physical art world. But the hype about NFTs is just that. It’s driven by speculators and you’d be crazy to think of this as a worthy investment opportunity.

Economics of Privacy Conference Livestream Begins at 11am EST Today (Fri, 02 Dec 2011)
https://techliberation.com/2011/12/02/economics-of-privacy-conference-livestream-begins-at-11am-est-today/

TechFreedom president and TLF contributor Berin Szoka will be speaking today at the Economics of Privacy conference hosted by the Silicon Flatirons Center at the University of Colorado and co-sponsored by TechFreedom. The entire conference will be livestreamed (embedded below) beginning at 11am EST; Berin’s panel begins at 4:30pm EST. Highlights include a keynote conversation with FTC Commissioner Julie Brill and keynote speeches by FTC Bureau of Economics Director Joseph Farrell and Carnegie Mellon University Information Technology and Public Policy Associate Professor Alessandro Acquisti. Check the schedule for full details. The Twitter hashtag for the event is #flatirons.

FCC Requires Online Public Inspection Files, But Misses Point of OpenGov: Data Accessibility (Mon, 31 Oct 2011)
https://techliberation.com/2011/10/31/fcc-requires-online-public-inspection-files-but-misses-point-of-opengov-data-accessibility/

At last Thursday’s FCC Open Commission Meeting, the Commission proposed to require television stations to make their “public inspection file” available online. But availability is not accessibility. If the FCC follows its usual practice of having filers submit PDFs (many of which are often scanned from printed documents), this data may be nearly useless to the small number of researchers who would really benefit from having a large set of public inspection files available online.

The public inspection file, a traditional hallmark of broadcast regulation, is a collection of documents that all radio and television stations must maintain and make available to anyone who asks to see it. Under the FCC’s existing rules, the file must contain, among other things:

  1. A complete record of airtime purchases by, or on behalf of, any political candidates or political issues of national importance.
  2. A quarterly report on the “programs containing [the station’s] most significant treatment of community issues.”

Neither is filed with the FCC and thus both are available only directly from the station. But accessing the file shouldn’t be difficult. The FCC’s rules clearly state that the file should be “available to members of the public at any time during regular business hours.” You can even ask the station staff to make photocopies for you (though you’ll have to pay for the copies).

Although the proposal voted on last Thursday hasn’t yet been released publicly, the plan as reported would require stations to submit information to the FCC, which will develop a publicly searchable online database of the submissions. This is in marked contrast to a 2007 FCC rule that required stations to put public inspection files on their own websites. Broadcasters sued to block the implementation of that rule, arguing that the FCC underestimated the paperwork burden and voicing First Amendment concerns. The court sent the rule back to the FCC for revision, and the FCC had been silent on the matter until now.

At last Thursday’s meeting, Commissioner Clyburn complained that most public inspection files are “in the deep recesses of broadcast stations, in dilapidated filing cabinets,” but this complaint misses the point. Stations don’t bring members of the public into their “deep recesses.” They ask them to have a seat in the lobby while a station staffer retrieves the file. What could be easier? You might even see a local celebrity while you wait!

But in fact, the public inspection files of most stations are almost never requested. In 2007, Viacom stated that visits to its stations’ public inspection files were “exceedingly rare … less than one annually, virtually all of whom are college students on assignment.” That’s probably because most people don’t know of their existence.

Just putting the public inspection files online would certainly make them more accessible and might lead to more public review. But the real beneficiaries of the FCC’s new rule are researchers. Much of that research involves comparing data across stations and/or across time to identify larger trends. But if the new system the FCC develops for public inspection file data is like many of the FCC’s existing online databases, it will be of limited use. Most FCC databases allow users to search only for a single licensee, and most of the data they contain is available only in separate PDF files for each licensee. This makes it very time-consuming to, for example, determine which station in a city receives the most political airtime purchases, or to track political airtime purchases over time.
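To see the difference machine-readable data makes, consider a sketch in which the FCC required stations to file political airtime purchases as structured CSV rather than scanned PDFs (the station call signs, candidates, and figures below are made up for illustration):

```python
import csv
import io
from collections import defaultdict

# Hypothetical machine-readable filings from multiple stations.
filings = io.StringIO(
    "station,candidate,amount_usd\n"
    "WAAA,Smith,12000\n"
    "WAAA,Jones,8000\n"
    "WBBB,Smith,5000\n"
)

# Aggregate political airtime purchases per station.
totals = defaultdict(int)
for row in csv.DictReader(filings):
    totals[row["station"]] += int(row["amount_usd"])

# One line of analysis that is effectively impossible with per-station PDFs:
top = max(totals, key=totals.get)
print(top, totals[top])  # WAAA 20000
```

The same question, posed against a pile of scanned PDFs, would require manually opening and transcribing each filing.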

This outdated approach to disclosure is fundamentally inconsistent with the broader efforts of the Obama Administration to implement Open Government through meaningful disclosure. The Open Government Directive directs agencies to

“publish information online in an open format that can be retrieved, downloaded, indexed, and searched by commonly used web search applications. An open format is one that is platform independent, machine readable, and made available to the public without restrictions that would impede the re-use of that information.”

In a 2010 memo to the heads of executive departments and agencies, Cass Sunstein, head of the Office of Information and Regulatory Affairs, explained “There is a difference between making a merely technical disclosure–that is, making information available somewhere and in some form, regardless of its usefulness–and actually informing choices.” Sunstein’s vision of transparency isn’t merely a means of opening up government, but also a less restrictive alternative to prescriptive regulation. That’s exactly why we have public files in the first place: because forced disclosure is less restrictive than limiting political advertisements or meddling (even more) with public interest programming.

The idea of transparency as an alternative to regulation goes all the way back to President Clinton’s Executive Order 12866 (1993): “Each agency shall identify and assess available alternatives to direct regulation, including … providing information upon which choices can be made by the public.” And in another memo to agency heads from just last month, Sunstein explained the benefits of Smart Disclosure, which

“makes information not merely available, but also accessible and usable, by structuring disclosed data in standardized, machine readable formats. … In many cases, smart disclosure enables third parties to analyze, repackage, and reuse information to build tools that help individual consumers to make more informed choices in the marketplace.”

Thus, if the FCC is going to require stations to collect certain information and make that information available to the public, it should make that information accessible too. That means requiring stations to submit the data in machine-readable format and ensuring that the submitted data is then made available in compliance with the 8 Principles of Open Government Data. While these principles were not developed by the Federal government, they are in keeping with the spirit of the FCC’s Data Innovation Initiative. FCC Chairman Julius Genachowski is exactly right: “public data should be accessible to the public in meaningful ways using modern digital tools.”

Fine words, Mr. Chairman. Why not start with public files?

TechFreedom/FOSI COPPA (Livecasted) Panel in DC October 12 (Fri, 07 Oct 2011)
https://techliberation.com/2011/10/07/techfreedomfosi-coppa-livecasted-panel-in-dc-october-12/

TechFreedom, in association with the Family Online Safety Institute (FOSI), will host a lunch panel with a number of leading experts to discuss the FTC’s recently-proposed revisions to the Children’s Online Privacy Protection Act (COPPA). Opening remarks will be delivered by the Federal Trade Commission’s Phyllis Marcus, a Senior Staff Attorney at the Division of Advertising Practices. Afterwards, the panel will discuss the FTC’s proposals and what they mean for children, parents, Internet companies and innovation.

FOSI CEO Stephen Balkam will serve as master of ceremonies. The panel will be moderated by Berin Szoka, President of TechFreedom, and will include:

The event will take place at the Top of the Hill Banquet and Conference Center at the Reserve Officers Association (One Constitution Ave NE, Washington DC 20002) on Wednesday, October 12 from 12:30 to 2:30pm, and include a complimentary lunch. Space is limited so please click here to register.

In addition, you can let everyone else know you’ll be coming or watching the livestream (page will be updated when event begins) by joining the Facebook event page.

You can also keep up with the event by following the Twitter discussion at the #COPPA hashtag.

Alcohol Liberation Front 14 @ Johnny’s Half Shell on 9/14 (Mon, 12 Sep 2011)
https://techliberation.com/2011/09/11/alcohol-liberation-front-14-johnnys-half-shell-on-914/

If you’re in DC this week, join Kevin Bankston from EFF, myself, fellow TLFers Berin Szoka, Geoff Manne, and Larry Downes, starting at 5:30pm at Johnny’s Half Shell, 400 North Capitol St NW. This event is being co-hosted by TLF and the Electronic Frontier Foundation (EFF). Please RSVP on Facebook so we have an idea how many people are attending. Attendees must be 21 or older. Space is limited.

And ALF 15 is already in the works. We’re planning to do it in conjunction with Digital Capital Week on November 8th. Stay tuned for more details!

TechFreedom Event on 7/19 – Sorrell: The Supreme Court Confronts Free Speech, Marketing & Privacy (Fri, 15 Jul 2011)
https://techliberation.com/2011/07/15/techfreedom-event-on-719-sorrell-the-supreme-court-confronts-free-speech-marketing-privacy/

The Supreme Court’s 6-3 decision in Sorrell v. IMS Health has been heralded as a major victory for commercial free speech rights and has raised serious questions about how to reconcile privacy regulations with the First Amendment. The Court struck down a Vermont law requiring that doctors opt in before drug companies could use data about their prescription patterns to market (generally name-brand) drugs to them. But what does the Court’s decision really mean for the regulation of advertising, marketing, and data flows across the economy? Has free speech doctrine fundamentally changed? Will existing privacy laws be subject to new legal challenges? How might the decision affect the ongoing debate about privacy regulation in Congress and at the FTC?

These are some of the questions that will be addressed by leading thinkers on First Amendment law and privacy at an event hosted by TechFreedom, a new digital policy think tank, and the law firm of Hunton & Williams LLP. The event will take place on Tuesday, July 19 from 12 to 3 p.m. at Hunton & Williams’s newly opened offices at 2200 Pennsylvania Ave NW, Washington DC. Complimentary lunch will be served.

The event will include two panels:

  • Panel 2: Privacy after Sorrell: Reconciling Data Restrictions & the First Amendment

TechFreedom filed an amicus curiae brief with the Supreme Court in this case (our media statement), led by Richard Ovelmen, and previously joined with other free speech groups in an amicus brief before the Second Circuit.

To Register: Space is limited. To guarantee a seat, register online here

Leaked Schwarzenegger v. EMA Press Release (Sun, 12 Jun 2011)
https://techliberation.com/2011/06/12/leaked-schwarzenegger-v-ema-press-release/

The Supreme Court will be issuing its opinion in the case Brown v. Entertainment Merchants Association any day now (TLF’s previous coverage is here). The case was previously known as Schwarzenegger v. Entertainment Merchants Association, but Mr. Schwarzenegger has been trying to stay out of court of late. I was just sent a draft of the statement that the Eagle Forum Education & Legal Defense Fund, which filed an amicus brief in the case, is planning to release if the decision goes its way. The Eagle Forum Education & Legal Defense Fund was founded by Phyllis Schlafly.

[Not really. This is a joke (but the quotes are true).]


[date] – The Eagle Forum Education & Legal Defense Fund (we just say “F’ed”) is happy to see that the U.S. Supreme Court has finally recognized that children are precious angels and need to be protected from reality. Its opinion in the Brown v. Entertainment Merchants Association case, released today, holds that states are free to ignore the First Amendment when it comes to children. While F’ed has long advocated fidelity to the text of the U.S. Constitution, it believes “traditional values” are more important than some document written 224 years ago. In response to the many calls that our position in support of California’s attempt to ban video games is hypocritical considering our mission is “to enable conservative and pro-family men and women to participate in the process of self-government and public policy making so that America will continue to be a land of individual liberty,” and we support “parents’ rights to guide the education of their own children … and to home-school without oppressive government regulations,” we say don’t listen to what we’ve said. “Be a ‘doer, not a hearer only.'” (You can’t argue with that–it’s from the Bible.)

As we stated in our amicus brief filing, “violent video games are the equivalent of ‘fighting words’ for kids who play them.” To that end, and emboldened by today’s decision, we are calling on Congress to “stop the violence” by enacting Federal restrictions similar to the California statute just upheld by the Supreme Court. Federal legislation is needed because digital downloads already represent 29% of game sales. Virtual “app stores” offered by companies such as Apple, Google, Amazon, and others allow children to access violent video games any time and from anywhere. Here are just a few examples of the sorts of games that are available on mobile phone app stores. They all “appeal[] to a deviant or morbid interest of minors” (to quote the language of the California law). Are your children playing these violent video games?

  • Office Jerk – Now known by the slightly less-offensive name “OfficeJK”, the point of this game for the Android platform is to throw food, golf balls, a bug, a stapler, and even dynamite at a defenseless and nonviolent officemate.
  • Dig Dug – Video games have been violent from the very beginning, starting with the first video game, “Spacewar!” Dig Dug, originally released in 1982, requires the player to kill “monsters” by either inflating them until they pop or dropping rocks on top of them (more on “crush videos” to come). Due to its age and the many emulators available for smartphones, this game is probably available for every platform, including graphing calculators.
  • Plants vs. Zombies – Another game available on a wide variety of platforms, from PCs to game consoles to portable devices and phones, this game has been nominated for multiple Interactive Achievement Awards, including the “Casual Game of the Year” award, and was one of the best games of 2009 according to the website Gamezebo. As you might surmise from the title, the game involves killing zombies. But you don’t just kill them by throwing fruits and vegetables at them (though that certainly does the trick in the early levels). Gameplay also involves explosions, rolling over zombies with giant walnuts, and literally mowing them down with lawn mowers. And the visuals are particularly disturbing, with limbs blown off and zombies literally turned to dust by explosions. It also makes fun of a special-needs individual named “Crazy Dave.”
  • Angry Birds – After the legality of “crush films” was barely upheld in 2008, it’s surprising this game even exists. The user scores points by flinging small birds into buildings and various other structures in an attempt to get the structures to collapse on top of pigs. To quote the California law, this is “especially heinous, cruel, or depraved in that it involves torture or serious physical abuse to the victim[s].”

It’s worth pointing out that with the exception of Office Jerk, which is unrated, all of the above games have been rated “Everyone” by the Entertainment Software Rating Board.

Phyllis Schlafly is not available for comment because she gets more self-fulfillment from “the daily duties of a wife and mother in the home.”

3D Printing: The Future is Here (Fri, 10 Jun 2011)
https://techliberation.com/2011/06/10/3d-printing-the-future-is-here/

Have you heard about 3D printing yet? Bre Pettis, founder of Makerbot, a company that sells a $1,300 home 3D printer, was Wednesday night’s guest on The Colbert Report. And back in April, Public Knowledge kicked off what’s sure to be a long public debate over the legal and policy questions raised by 3D printing with a half-day conference here in D.C.

Also called “additive manufacturing,” 3D printing is the process of “printing” a three-dimensional object layer-by-layer with equipment that’s not much different from ink-jet printers. Combine 3D printing with 3D scanning and you’ve got the first real step towards something that seems at first like total science fiction: A Star Trek replicator.

The Future is Here

The current state of the art in 3D scanning and printing is already quite advanced. There’s already a growing legion of hobbyists building 3D printers in their basements and sharing object designs across the Internet. As in the early days of radio, these hobbyists are creators, not just consumers.

On the scanning side, for $2,995 you can buy your own NextEngine 3D scanner, which can do a completely automated 360-degree scan of (the surface of) objects up to 5.1” x 3.8” in size at 0.005 inch accuracy in under 30 minutes. For larger objects, you can scan a portion at a time and automatically stitch the scans together via software. The easy-to-use device connects to Windows PCs via USB and includes all necessary software. Don’t have $2,995? Not a problem. You can make your own 3D scanner with a cheap laser pointer, wine glass, videocamera or webcam, and a record player. No, really! Or just use the Microsoft Kinect Xbox 360 peripheral (example, source code). Too much effort? Get the $0.99 iPhone app or do it for free with any digital still camera and the My3DScanner website.

Once you’ve got a scan of the object you want to replicate (or you’ve created your own design using free software), you need a 3D printer. The printer that Jay Leno uses to replicate parts for his classic cars is now available for less than $15,000. For $1,299, you can buy a MakerBot Thing-O-Matic kit that you assemble (no pun intended) yourself. Still too much? You don’t need to buy your own 3D printer when you can use someone else’s. There are now several online 3D printing services where you can upload your scan or custom-designed 3D model and get the 3D-printed model mailed to you. Pricing is based on the size of your object and the materials used, and starts at around $0.80 per cubic centimeter. Small trinkets and pieces of jewelry can be made for less than $30 each.
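As a rough illustration of that pricing model (the rate is just the ballpark figure above, and the volume is a made-up example; actual service pricing varies by material and vendor):

```python
# Back-of-the-envelope cost estimate for an online 3D printing service.
def print_cost(volume_cm3, rate_per_cm3=0.80):
    """Estimate cost at a flat per-cubic-centimeter rate."""
    return volume_cm3 * rate_per_cm3

# A small trinket of roughly 25 cubic centimeters:
print(print_cost(25))  # 20.0 -- comfortably under the $30 figure above
```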

3D scanning and printing is already in use commercially today. There are already more than 10 million 3D printed hearing aids in use worldwide. Boeing is manufacturing some airplane parts using 3D printing. Again, Jay Leno is replicating worn-out parts to restore classic cars. And there’s at least one vehicle that will have all of its exterior components made by 3D printers.

The next step in 3D printing is combining multiple materials, which will allow 3D printers to print working electronic devices. The end-game is a self-replicating machine, long an engineering dream. The open design RepRap project has already succeeded in printing all of the custom plastic parts needed to make another RepRap machine.

3D Printing and Intellectual Property

Just as the tape recorder did for music and the videocamera did for television and movies, 3D replicators will soon make duplication of physical objects much easier—with major, and probably obvious, intellectual property ramifications. Just as most people think nothing of photocopying a page from a book or magazine, in a few years people may think nothing of using a 3D copier to make a copy of an earring when the other is lost, making a left-handed copy of a pair of right-handed scissors, or simply buying a single candlestick at a store and then making five more copies at home. While current home 3D printers and scanners may not yet be up to the task of these examples (which all involve relatively simple exterior-scanning), it’s worth pointing out that there’s no Digital Rights Management (DRM) in current devices.

Digital Rights Management is a catch-all term for manufacturer-designed, built-in restrictions meant to ensure that devices cannot be used to infringe copyrights. That’s why, for example, you can’t simply copy a purchased iPhone app to a friend’s iPhone. It’s not difficult to envision a future where home 3D printers are restricted to printing only pre-approved designs and/or are limited in resolution and/or materials so that they are really only good for trinkets. This sort of thing has happened with previous technologies. Soon after VCRs entered the market, Hollywood tried very hard to control them (see Sony v. Universal).
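A minimal sketch of how such “approved designs only” DRM could work in printer firmware (purely hypothetical; no current 3D printer is known to work this way, and the file names are invented):

```python
import hashlib

# A vendor-controlled allow-list of design-file hashes.
APPROVED = {hashlib.sha256(b"trinket-v1.stl").hexdigest()}

def can_print(design_bytes):
    """An "appliancized" printer: refuse any design not on the allow-list."""
    return hashlib.sha256(design_bytes).hexdigest() in APPROVED

print(can_print(b"trinket-v1.stl"))     # True: vendor-approved design
print(can_print(b"left-scissors.stl"))  # False: unapproved design is rejected
```

The allow-list is the whole game: whoever controls `APPROVED` controls what the device can make, which is exactly the generative-versus-appliancized tension discussed below.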

Sure, such DRM controls could be “hacked” just as people have hacked DRM systems on cellphones, ereaders, videogame systems, etc. There’s even a great short story (or audiobook) by Bruce Sterling about a “fabrikator” owner hacking his device to print all sorts of things it’s not supposed to print. Neal Stephenson’s novel The Diamond Age revolves in large part over efforts to overcome the restrictions on nanotech-powered “matter compilers” intended to prevent users from compiling weapons or harmful substances.

It is also possible to envision a spectrum of DRM protections, ranging from—to use Jonathan Zittrain’s much-ballyhooed terminology—perfectly “generative” 3D printers limited only by their technological capabilities to more “appliancized,” regulated printers that are restricted to printing only approved designs in approved quantities. This is no different from how other industries operate today: Authors can decide to only distribute their books through DRM-protected services to DRM-protected ereaders or they can make them available without DRM. Recording artists and television and movie producers can do the same. Although there are a few cases of government-mandated DRM (e.g. the broadcast flag for television), most DRM systems are voluntary and market-negotiated. Personally, I don’t see why the field of 3D printing should be any different. Some consumers may be willing to pay a lower price for a DRM-enabled 3D printer and others may prefer to pay more for an unencumbered printer. Let the market decide.

3D Printing and Product Liability

What I find more interesting than the intellectual property issues involved in 3D printing are the product liability issues. The industrial and then digital revolutions have all but killed the concept of “buyer beware.” For how can the average buyer understand how today’s consumer electronics work and all the ways they can fail or injure someone?

Instead, many states impose strict liability on manufacturers for defects based on the belief that “the costs of injuries resulting from defective products [should be] borne by the manufacturers that put such products on the market rather than by the injured persons who are powerless to protect themselves.” (Greenman v. Yuba Power Products, Inc., 377 P.2d 897 (1963), qtd. in Congressional Research Service, Products Liability: A Legal Overview, CRS Issue Brief, Jun. 3, 2005). Without strict liability for manufacturers, even if a consumer could prove an injury is due to a design or manufacturing defect, the injured consumer would likely only be able to recover from the retailer that sold them the defective product. But a major injury claim could bankrupt a retailer. So, rather than requiring the injured party to establish a chain of liability from the retailer to the distributor, supplier, and ultimately to the manufacturer, strict liability allows plaintiffs to directly sue the manufacturer of a product that caused injury.

When 3D print shops (one online directory lists over 600 such companies already in existence) become as commonplace as Kinko’s … sorry, FedEx Office, will they be considered the manufacturer? Imposing strict liability on them would likely kill 3D print shops.

Ponoko (a web-based 3D printing service that also allows designers to set up their own virtual CafePress for 3D objects) clearly disclaims any liability in its terms and conditions: “We accept no liability for any products displayed on the website, do not give any warranty nor make any representation about any products on the website, and are neither a seller nor agent of the designer, seller or buyer.” While that may seem to settle the matter, as someone in the audience at Public Knowledge’s event muttered, that might well change the first time a child is injured by a 3D-printed item.

3D print shops like Ponoko allow designers to design and sell real physical products to real consumers without ever having seen the product, much like Lulu’s print-on-demand service for authors. Presumably the designer would order and inspect at least one product themselves before making the product available to the public, but if they design a product that requires tolerances near the limits of what the 3D print shop’s equipment is capable of, some of the products may work just fine while others may fail catastrophically. Should the problem be considered a design defect for which the designer is responsible or a manufacturing defect for which the manufacturer (the 3D print shop) is responsible? Should both be responsible? This problem has likely already been dealt with in the traditional realm of product liability, but traditionally, designers of retail products were large companies. That means they had the resources to extensively test products before releasing them to market, they could purchase insurance and legal counsel, and they (hopefully) had the funds to pay for damages.

If a lone designer sells his or her wares through a Ponoko product gallery, placing liability on the designer for design, but not manufacturing, defects makes sense. But if the designer is a near-penniless “maker” artist, there’s no point in suing them. Personal injury attorneys tend to focus their sights on entities with deep pockets–and rightly so. And if a designer posts a design with a Creative Commons license to Thingiverse that is subsequently redesigned by a string of other designers (e.g. the planetary gearbox clock, or Stephen Colbert’s head), it may be very difficult indeed to identify which designer is responsible for the defect. While a “maker beware” doctrine may be sufficient for trinkets that people can print themselves (regardless of where they get the design), the widespread use of more complex and costly 3D-printed objects may be limited until there is some entity willing to provide a warranty.

Policy Implications

3D printing and scanning technology is about to change the world. Bre Pettis likened the Public Knowledge conference to the first meeting of the Homebrew Computer Club, which was seen as a watershed moment in the personal computer revolution. I share his excitement in attending the dawn of a new industry that could prove as important as (possibly more important than) the Internet. But as the two examples above make clear, 3D printing raises serious policy questions.

Although the technologies involved are different, there are lots of similarities in the ways the public, media, and lawmakers respond to new technologies, whether the telegraph, Internet, or 3D printing. As Tom Standage has documented and Ithiel de Sola Pool predicted, “All of this has happened before, and it will all happen again.”

As we watch this new industry develop, the following questions will likely frame many debates:

  1. Which is more important: The freedom to innovate or the need to hold someone responsible for any harms that occur?
  2. Where should liability lie when the exact cause of an injury cannot be determined?
  3. Should the law treat amateurs differently than professionals? If so, where should the line between them be drawn?
  4. What should the government’s role be in protecting intellectual property?
  5. Is the new technology a “natural monopoly” that must be regulated?

These questions are not unique to 3D printing. But unlike the Internet, which spent its first few years safely ensconced in the military industrial complex, away from malevolent users, malware, spam, and the court system, 3D printing is, by definition, based in the real world. Thus these questions will likely have to be answered sooner rather than later.

]]>
https://techliberation.com/2011/06/10/3d-printing-the-future-is-here/feed/ 649 37296
U.S. Copyright Czar Sounding Very Libertarian https://techliberation.com/2011/06/08/u-s-coypright-czar-sounding-very-libertarian/ https://techliberation.com/2011/06/08/u-s-coypright-czar-sounding-very-libertarian/#comments Wed, 08 Jun 2011 17:12:07 +0000 http://techliberation.com/?p=37221

The U.S. government doesn’t need to pick winners and losers and the last thing we should think about doing is messing up the Internet with inappropriate regulation.

Amen, sister! The above quote comes from Victoria Espinel, the U.S. Intellectual Property Enforcement Coordinator for the Office of Management and Budget (AKA the Copyright Czar), speaking at the World Copyright Summit in Brussels about how corporate innovation is often more effective than laws. She went on to explain that the cloud-based music services now offered by Apple, Amazon, and Google “may have the effect of reducing piracy by giving value to consumers …” Espinel is an Obama appointee, which calls into question the concerns voiced a year ago that the RIAA was taking over the Department of Justice.

The next stop on her speaking tour should be the Federal Communications Commission.

]]>
https://techliberation.com/2011/06/08/u-s-coypright-czar-sounding-very-libertarian/feed/ 2 37221
“Rogue Archivist” Carl Malamud On How to Fix Gov2.0 https://techliberation.com/2010/09/08/rogue-archivist-carl-malamud-on-how-to-fix-gov2-0/ https://techliberation.com/2010/09/08/rogue-archivist-carl-malamud-on-how-to-fix-gov2-0/#respond Wed, 08 Sep 2010 17:55:37 +0000 http://techliberation.com/?p=31725

At yesterday’s Gov2.0 Summit conference, “rogue archivist” Carl Malamud gave a great speech about what’s wrong with government IT and what should be done about it.

“If our government is to do the jobs with which we have entrusted it, … the machinery of our government must first be made to work properly.”

Malamud describes a government IT landscape that is a “vast wasteland of contracts that lie fallow inside this beltway” because of agency capture by special interests and proposes three steps to fix government IT:

  • Finish the opengov revolution – create and enforce bulk data standards, release more government data using those standards, and update the Freedom of Information Act for the Internet age to require that any data released in response to a FOIA request is also posted online for anyone to access (others have already taken up this cause)
  • Create a National Scanning Initiative – Spend at least $250 million per year (a third of what the Smithsonian currently receives from the Federal government) for a decade to put all of the works housed at the Smithsonian, the National Archives, the Library of Congress, the National Library of Medicine, and the Government Printing Office online
  • Create a Computer Commission with authority to conduct agency-by-agency reviews and change projects from relying on over-designed custom systems to ones based on open-source building blocks and judicious use of commercial off-the-shelf components

O’Reilly’s Jim Stogdill believes that Malamud’s speech is an implicit recognition that Federal IT projects are just too big for the typical top-down IT development process and the better approach is “structuring incentives, policies, and ecosystems to encourage the complex to emerge from the simple.” This approach is basically the Unix philosophy, which is best summarized as “Design programs to do only a single thing, but to do it well, and to work together well with other programs.”

One big problem with most government software projects is that they’re developed without any thought of having those systems interact with other systems. As a result, data files are typically proprietary and importing and exporting data is impossible. But if federal IT projects were developed more in line with the Unix philosophy, as smaller, modular, interoperable systems, they would be more manageable and problems with a specific component would not jeopardize other systems.
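To make that contrast concrete, here is a minimal, hypothetical sketch (all names and records are invented) of two independently written tools interoperating through an open, documented format instead of a proprietary file layout:

```python
import json

def export_records(records):
    """Tool 1: serialize records to an open, documented format (JSON)."""
    return json.dumps(records, indent=2)

def count_by_agency(serialized):
    """Tool 2: written separately, consumes the same open format."""
    counts = {}
    for record in json.loads(serialized):
        counts[record["agency"]] = counts.get(record["agency"], 0) + 1
    return counts

# Because both tools agree only on the interchange format, either one
# can be replaced or audited without touching the other.
exported = export_records([
    {"id": 1, "agency": "GPO"},
    {"id": 2, "agency": "LOC"},
    {"id": 3, "agency": "GPO"},
])
print(count_by_agency(exported))  # a second system reads the export directly
```

The same division of labor is what makes a FOIA-style bulk-data mandate workable: once the format is open, anyone can build the second tool.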

And as Stogdill points out, there are only a few companies able to deal with the complexity of the Federal Acquisition Regulation and the scale typical of most government projects. Breaking things into smaller components and open-sourcing the code developed on all new projects will enable many more companies to compete for these contracts.

The Obama administration is the first presidency to have a Chief Information Officer. I only hope he was listening to Malamud’s speech.

]]>
https://techliberation.com/2010/09/08/rogue-archivist-carl-malamud-on-how-to-fix-gov2-0/feed/ 0 31725
MPAA Ratings Are Better Than the Alternative https://techliberation.com/2010/08/20/mpaa-ratings-are-better-than-the-alternative/ https://techliberation.com/2010/08/20/mpaa-ratings-are-better-than-the-alternative/#comments Fri, 20 Aug 2010 14:08:29 +0000 http://techliberation.com/?p=31255

Back in March, the Motion Picture Association of America re-launched its film-rating website, filmratings.com. While this may be old news to some, I just learned about it from a post on BoingBoing which makes fun of the rationales given for the ratings, which are available on the new website. Example: The movie “3 Ninjas Knuckle Up” was “rated PG-13 for non-stop ninja action.”

It’s fine to joke about particular ratings, but we shouldn’t forget that the MPAA’s rating system was created to avoid government censorship, which was a real possibility after the 1915 U.S. Supreme Court case Mutual Film Corporation v. Industrial Commission of Ohio, which ruled that “the exhibition of moving pictures is a business, pure and simple, originated and conducted for profit … not to be regarded, nor intended to be regarded by the Ohio Constitution, we think, as part of the press of the country, or as organs of public opinion.” By a unanimous vote, the Supreme Court ruled that the First Amendment did not apply to motion pictures because “they may be used for evil.” (There was also an issue of whether the First Amendment applied to state actions, but because the state constitution at issue was substantially similar to the U.S. Constitution, that was not a factor in the opinion).

After a number of Hollywood scandals and public outcry over the immorality of Hollywood in the 1920s, the Motion Pictures Producers and Distributors Association (the precursor to the MPAA), adopted the Motion Pictures Production Code (known as the “Hays Code” after the first MPAA president) in 1930. The code required that “No picture shall be produced that will lower the moral standards of those who see it. Hence the sympathy of the audience should never be thrown to the side of crime, wrongdoing, evil or sin.”

]]>
https://techliberation.com/2010/08/20/mpaa-ratings-are-better-than-the-alternative/feed/ 5 31255
“Jailbreaking” Won’t Land You In Jail https://techliberation.com/2010/07/29/jailbreaking-wont-land-you-in-jail/ https://techliberation.com/2010/07/29/jailbreaking-wont-land-you-in-jail/#comments Thu, 29 Jul 2010 17:54:07 +0000 http://techliberation.com/?p=30751

The Digital Millennium Copyright Act makes it a crime to circumvent digital rights management technologies but allows the Librarian of Congress to exempt certain classes of works from this prohibition.

The Copyright Office just released a new rulemaking on this issue in which it allows people to “unlock” their cell phones so they can be used on other networks and “jailbreak” closed mobile operating systems like iOS on Apple’s iPhones so that they will run unapproved third-party software.

This is arguably good news for consumers: Those willing to void their warranties so they can teach their phone some new tricks no longer have to fear having their phone confiscated, being sued, or being imprisoned. (The civil and criminal penalties are described in 17 USC 1203 and 17 USC 1204.) Although the new exemption does not protect those who distribute unlocking and/or jailbreaking software (which would be classified under 17 USC 1201(b), and thus outside the exemption of 17 USC 1201(a)), the cases discussed below could mean that jailbreaking phones simply falls outside of the scope of all of the DMCA’s anti-circumvention provisions.

Apple opposed this idea when it was initially proposed by the Electronic Frontier Foundation, arguing that legalizing jailbreaking constituted a forced restructuring of its business model that would result in “significant functional problems” for consumers that could include “security holes and malware, as well as possible physical damage.” But who, beyond the small number of geeks brave enough to give up their warranties and risk bricking their devices, is really going to attempt jailbreaking? One survey found that only 10% of iPhone users have jailbroken their phones, and the majority are in China, where the iPhone was not available legally until recently. Is it really likely that giving the tinkering minority the legal right to void their product warranties would cause any harm to the non-tinkering majority that will likely choose to instead remain within a manufacturer’s “walled garden”? I don’t think so. If, as a result of this ruling, large numbers of consumers jailbreak their phones and install pirated software, the Copyright Office can easily reconsider the exemption in its next Triennial Rulemaking.

While the ruling is heartening, it is not surprising. In Chamberlain Group, Inc. v. Skylink Techs., Inc.,  the United States Court of Appeals for the Federal Circuit held that trafficking in a circumvention device violates Section 1201(a)(2) only if the circumvention enables access that “infringes or facilitates infringing a right protected by the Copyright Act.” The Chamberlain case involved unlicensed third-party garage door opener remotes. The Sixth Circuit came to a similar decision in Lexmark International, Inc. v. Static Control Components, Inc., a case involving a software “handshake” between Lexmark printers and Lexmark-branded toner cartridges meant to keep third-party replacement toner cartridges off the market. The Copyright Office’s ruling is just another example of policymakers recognizing that Copyright law exists only to protect copyrighted works, not business models based on excluding access.

But self-help is a two-way street: Companies are, and should be, free to continue using their own “self-help” technical protection measures to prevent (or merely discourage) customers from reverse-engineering their products. This highlights what Larry Lessig describes as the distinction between East Coast Code (laws) and West Coast Code (software). It makes perfect sense for companies to avail themselves of all possible methods (software and laws) to protect their revenue streams, but lawbreakers, by definition, don’t respect laws. Although most technical protection measures have been woefully inadequate to date (see, e.g., 1, 2, 3, 4, 5, to name a few), cryptographically-secure code is much more likely to be effective in the long-term than laws.
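As a toy illustration of “West Coast Code,” the sketch below shows how a device could refuse to run unapproved software using a keyed hash. Everything here is invented for illustration: real platforms use asymmetric code signing with public-key certificates, not a shared HMAC key, but the enforcement-by-software idea is the same.

```python
import hashlib
import hmac

# Hypothetical key baked into the device at the factory (invented).
VENDOR_KEY = b"vendor-device-key"

def sign(app_bytes: bytes) -> bytes:
    # The vendor tags approved software with a keyed hash.
    return hmac.new(VENDOR_KEY, app_bytes, hashlib.sha256).digest()

def device_will_run(app_bytes: bytes, signature: bytes) -> bool:
    # The device enforces the policy itself -- no statute required.
    return hmac.compare_digest(sign(app_bytes), signature)

approved = b"vendor-approved app"
assert device_will_run(approved, sign(approved))
assert not device_will_run(b"unapproved app", sign(approved))
```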

While this decision probably doesn’t matter much for the average, non-tinkering consumer, tinkerers will be comforted by the fact that their hobby is no longer a crime, and without the threat of criminal sanctions, there should be more publicity about what new mobile phones are really capable of. That, in turn, should put additional pressure on phone manufacturers to take off the training wheels and be a bit more open about what apps they allow on their devices.

While Apple is correct in pointing out that some users with jailbroken phones still call Apple’s technical support lines, it is quite impossible to accidentally jailbreak your phone, and all of the websites with instructions on how to do so have extensive disclaimers warning about the possible consequences. At some point, consumers should be responsible for their own actions. The Librarian of Congress is willing to give them that responsibility. And whether they want to or not, phone manufacturers will too.

]]>
https://techliberation.com/2010/07/29/jailbreaking-wont-land-you-in-jail/feed/ 3 30751
Imagining a world without the Internet https://techliberation.com/2010/01/11/imagining-a-world-without-the-internet/ https://techliberation.com/2010/01/11/imagining-a-world-without-the-internet/#respond Mon, 11 Jan 2010 21:17:56 +0000 http://techliberation.com/?p=24971

The fun folks at Cracked.com tried to imagine what the world would be like without the Internet. Think porn, piracy, and pranks. But beyond that, the remaining examples point out how ill-suited the offline world is for the types of interpersonal communications that take place online. Maybe that’s why, without the Internet, these Photoshop pundits predict we’ll all be reading newspapers and getting perfect scores on grammar tests.

]]>
https://techliberation.com/2010/01/11/imagining-a-world-without-the-internet/feed/ 0 24971
More on Fart Apps, and Soundboards Generally https://techliberation.com/2009/11/19/more-on-fart-apps-and-soundboards-generally/ https://techliberation.com/2009/11/19/more-on-fart-apps-and-soundboards-generally/#comments Thu, 19 Nov 2009 20:04:24 +0000 http://techliberation.com/?p=23640

My colleague (and boss) Adam Thierer had a great post last week about how “fart apps” are a great example of the generative nature of the mobile phone application marketplace. But fart apps are just one type of “soundboard” application. A typical soundboard app has a bunch of buttons, and each time you press a button a sound is played. Most soundboards play catchphrases from popular movies and TV shows. According to AndroidZoom.com, there are 319 applications in the Android Market with “soundboard” in the title or description. Most (280) of them are free.

Almost all the free soundboards I tried include advertising from Google. The three main developers of soundboard apps for Android are Androidz , aspidoff, and Raz Corp. Androidz has ads from DoubleClick and aspidoff and Raz Corp (whose apps seem exactly the same) both have ads from AdMob (which Google recently acquired). I’m all in favor of ad-supported content, but I suspect that the sound clips used in these soundboards are not licensed. It’s not a bad revenue model—if you don’t mind the fact that it’s illegal. You search the Internet for sound clips from a popular show or movie, generate a soundboard using an off-the-shelf or custom-made soundboard generator, post to the Android Market, and wait for the money to come rolling in. I don’t know how much money one can make off of the advertising from a free soundboard, but the most popular ad-supported soundboard, the You Kicked My Dog soundboard (based on a popular prank phone call), was once ranked the 213th most-popular download in the Android Market (according to AndroidStats.com). The most popular soundboard today seems to be the Mr. T soundboard from gman16k (which is free and doesn’t include any ads). It is currently ranked the 111th most popular Android Market download and according to the Android Market browser on my phone (the Samsung Moment), has been downloaded at least 50,000 times. And the Jeff Dunham soundboard from SoundBored, which costs $0.99, has been downloaded at least 1,000 times and is the 688th most popular app in the Entertainment category of the Android Market. The most popular fart app for Android? Noble Fart (which is free and ad-free). There may be some real money to be made there!

The most interesting aspect to all this is whether Google can be found liable for these presumed copyright infringements. Google bought AdMob specifically to incentivize developers to make applications for Android. Google provides the ad network, tools to help developers integrate ads into their apps, the marketplace application that users use to find and install those applications, and Google even wrote the operating system on which these applications run. The Android Market Developer Distribution Agreement of course requires that developers have all intellectual property rights for the products and materials they distribute through the Android Marketplace (clause 5.5), and disclaims any responsibility on the part of Google for the actions of developers (clauses 4.6 and 4.7). As my PFF colleague Tom Sydnor recently explained, to be found liable for inducing copyright infringement under Grokster, the defendant must be found to have intended and encouraged the product to be used to infringe. That may be hard to prove, but there may be a better case for a “vicarious-liability claim, which focuses, instead, on broadly defined elements of ‘right-or-ability to control,’ and ‘direct financial benefit.’” Google has total control over what apps are available through the Android Market and it believes there is a direct financial benefit in growing the number of applications available–that’s why it bought AdMob.

I don’t want Google to have to police its app store. That’s the whole point of Adam (Thierer)’s post: The fact that Google doesn’t control its app store with an iron fist is a good thing for innovation. I just worry that it may not be a good thing for Google if it means it’s found liable for copyright infringement.

]]>
https://techliberation.com/2009/11/19/more-on-fart-apps-and-soundboards-generally/feed/ 5 23640
Privacy Solutions Part 8: The Best Anonymizer Available: Tor, the TorButton & TorBrowser https://techliberation.com/2009/11/10/privacy-solutions-part-8-the-best-anonymizer-available-tor-the-torbutton-torbrowser/ https://techliberation.com/2009/11/10/privacy-solutions-part-8-the-best-anonymizer-available-tor-the-torbutton-torbrowser/#comments Tue, 10 Nov 2009 21:15:04 +0000 http://techliberation.com/?p=23299

By Eric Beach and Adam Marcus

In the previous entry in the Privacy Solutions Series, we described how privacy-sensitive users can use proxy servers to anonymize their web browsing experience, noting that one anonymizer stood out above all others: Tor, a sophisticated anonymizer system developed by the Tor Project, a 501(c)(3) U.S. non-profit venture supported by industry, privacy advocates and foundations, whose mission is to “allow you to protect your Internet traffic from analysis.” The Torbutton plug-in for Firefox makes it particularly easy to use Tor and has been downloaded over three million times. The TorBrowser Bundle is a pre-configured “portable” package of Tor and Firefox that can run off a USB flash drive and does not require anything to be installed on the computer on which it is used. Like most tools in the Privacy Solutions series, Tor has its downsides and isn’t for everyone. But it does offer privacy-sensitive users a powerful tool for achieving a degree of privacy that no regulation could provide.

Why Use Tor?

The Tor Project identifies its users as parents, militaries, journalists, law enforcement officers, activists, whistleblowers, and others. At a high level, Tor addresses essentially four problems:

(1) Outbound blocking of internet traffic by IP or domain name. Countries, businesses, and Internet service providers may block web users from accessing certain IPs associated with domain names that are deemed inappropriate. For example, access to certain domain names from inside some United States federal government computer networks is restricted, some companies block pornography, and some governments may censor access to some websites.

(2) Blocking of Internet traffic based upon content analysis. Rather than simply relying on website blacklists, many countries use content-based filtering to prevent individuals from seeking out information deemed undesirable. For example, the Chinese government censors searches for “falun gong” through packet inspection and analysis.

(3) ISP traffic logging. With the increased use of deep packet inspection, some privacy-sensitive Internet users worry that Internet service providers may be capable of logging the online activity of millions of Americans, and providing that information to governments or other third parties (lawfully or otherwise).

(4) Government monitoring. With the United States government’s pervasive surveillance of the electronic activities of Americans, some citizens understandably desire to protect their First Amendment right to anonymously send and receive information, i.e., without the government being able to determine their identity.

How Tor Works

The general web data flow online looks something like this:

As we mentioned in our piece about anonymizers, a sophisticated anonymizer can obscure the identity of any one web user by pooling requests from large numbers of users across a “daisy chain” of proxy servers, thus effectively anonymizing the user’s identity, like so:

Tor works somewhat differently: Rather than simply trying to achieve “anonymity in a crowd” (of other web users using the network), Tor’s “client software” (e.g., TorButton) picks a random path through a network of other “Tor nodes” (users of Tor) for every request sent from the user’s computer. As the Tor Project explains:

Tor helps to reduce the risks of both simple and sophisticated traffic analysis by distributing your transactions over several places on the Internet, so no single point can link you to your destination. The idea is similar to using a twisty, hard-to-follow route in order to throw off somebody who is following you – and then periodically erasing your footprints. Instead of taking a direct route from source to destination, data packets on the Tor network take a random pathway through several relays that cover your tracks so no observer at any single point can tell where the data came from or where it’s going.

Tor thus achieves a high degree of anonymity, relying “not on the trustworthiness of individual servers but rather on the network design, which prevents a given router from knowing both the origin and the destination or even which other routers it would need to cooperate with to get that information.”
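The layering idea can be sketched in a few lines. This toy uses XOR as a stand-in for real encryption (Tor actually builds encrypted circuits with real ciphers); the only point is that the client adds one layer per relay and each relay removes exactly one:

```python
import os

def xor_layer(data: bytes, key: bytes) -> bytes:
    # Toy "encryption": XOR with a repeating key. NOT real crypto --
    # it only illustrates adding and removing a layer.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def build_onion(message: bytes, relay_keys: list) -> bytes:
    # The client wraps the message once per relay: innermost layer
    # for the exit node, outermost layer for the entry node.
    onion = message
    for key in reversed(relay_keys):
        onion = xor_layer(onion, key)
    return onion

def route_through_relays(onion: bytes, relay_keys: list) -> bytes:
    # Each relay peels exactly one layer; no single relay sees both
    # who sent the message and what it finally says.
    for key in relay_keys:
        onion = xor_layer(onion, key)
    return onion

keys = [os.urandom(16) for _ in range(3)]   # one key per relay (invented)
onion = build_onion(b"GET /index.html", keys)
assert onion != b"GET /index.html"          # unreadable in transit
assert route_through_relays(onion, keys) == b"GET /index.html"
```

The entry node knows who you are but not where the request is going; the exit node knows the destination but not who asked. That separation is what the network-design quote above is describing.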

The following chart from the Tor Project’s more extensive explanation conveys the basics:

How to Install Tor

As mentioned above, Firefox users can install the TorButton plug-in, which will allow users to turn Tor on or off as desired.

The Tor Project also offers TorBrowser, an all-in-one bundle of the portable edition of Firefox (which can be carried along with all its settings on a USB stick or CD) pre-configured with the Tor plug-in. There is also a version of TorBrowser that includes the Pidgin instant messaging client, for those who also want to protect their instant messaging. Set-up takes less than three minutes and is just the thing for those trying to stay “one step ahead of The Man.” For more help on how to install the TorBrowser, click here or here.

Downsides/Risks of Tor

Speed. The biggest downside of using Tor is its slowness, which occurs for three reasons:

  1. Tor transports data among many intermediary nodes. Just as it takes considerably longer to drive from Los Angeles to San Francisco if you travel through Phoenix, Dallas, and Denver, so it takes considerably longer to go from the end-user to the final destination if the data packets must transfer through four or five intermediaries.
  2. Tor encrypts the data between the intermediary nodes.
  3. Some intermediary nodes do not have high-bandwidth connections.

The following examples from an informal survey illustrate just how much Tor can slow down web browsing:

Domain       Time for Direct Access   Time for Tor Access
cnn.com      28.1 seconds             188 seconds
baidu.com    2.2 seconds              9.34 seconds
google.de    1.89 seconds             7.5 seconds
pff.org      15.87 seconds            74 seconds

Note: The results of the speed test depend heavily upon the specific Tor route used. Stopping Tor and then re-enabling it would likely produce a materially different result since the speed of the intermediary and exit-nodes would likely be different.
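Using the figures from the informal survey above, the slowdown works out to roughly 4x to 7x:

```python
# Direct vs. Tor load times (seconds), from the informal survey above.
timings = {
    "cnn.com":   (28.1, 188),
    "baidu.com": (2.2, 9.34),
    "google.de": (1.89, 7.5),
    "pff.org":   (15.87, 74),
}

for domain, (direct, tor) in timings.items():
    print(f"{domain}: {tor / direct:.1f}x slower over Tor")
```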

While Tor is slow, its performance can be improved somewhat by changing a number of default configuration options. See here, here, here and here.

Increased Vulnerability. The second major downside is that the exit-node could record your data or perform a number of malicious attacks, as explained by Ars Technica and SecurityFocus.com. As the Berkman Center’s 2007 Circumvention Report noted, “Tor provides strong anonymity only if the user is careful to submit data to HTTPS protected servers.” If you plan to use Tor, you should consult the following Tor security warnings:

  • REMARK(S) ABOUT USING CONFIDENTIAL DATA ON (INSECURE) NON-HTTPS/SSL-CONNECTIONS: If you’re planning to visit password protected sites on non-encrypted connections, keep in mind that some exit-nodes record the passwords and possibly use them for abuse. Also all other transferred data is possibly recorded and misused.
  • REMARK(S) ABOUT ACCESSING ELECTRONIC BANKING AND OTHER SENSITIVE SITES VIA TOR: Most banks and similar institutions (PayPal for example) are using extended fraud countermeasures, like IP-origin plausibility checks and anonymous server blacklistings. Therefore you risk getting your bank account locked for security reasons by using the Tor-network.
  • REMARK(S) ABOUT (SECURE) HTTPS/SSL-CONNECTIONS TO FRAUD CRITICAL SITES: If you’re planning to visit fraud critical HTTPS/SSL-secured sites (Banks for example) and that specific site is querying you unexpectedly about accepting a new SSL-Certificate, be highly alert. Check the Certificate data or try another EXIT-node first. There are some rumors around, that some EXIT-nodes are trying to fake/highjack such HTTPS/SSL-connections.
]]>
https://techliberation.com/2009/11/10/privacy-solutions-part-8-the-best-anonymizer-available-tor-the-torbutton-torbrowser/feed/ 11 23299
Privacy Solutions Part 7: How Anonymizers Can Empower Privacy-Sensitive Users https://techliberation.com/2009/11/10/privacy-solutions-part-7-how-anonymizers-can-empower-privacy-sensitive-users/ https://techliberation.com/2009/11/10/privacy-solutions-part-7-how-anonymizers-can-empower-privacy-sensitive-users/#comments Tue, 10 Nov 2009 19:56:15 +0000 http://techliberation.com/?p=23296

By Eric Beach & Adam Marcus

Among Internet users, there are a variety of concerns about privacy, security and the ability to access content. Some of these concerns are quite serious, while others may be more debatable. Regardless, the goal of this ongoing series is to detail the tools available to users to implement their own subjective preferences. Anonymizers (such as Tor) allow privacy-sensitive users to protect themselves from the following potential privacy intrusions:

  1. Advertisers Profiling Users. Many online advertising networks build profiles of likely interests associated with a unique cookie ID and/or IP address. Whether this assembling of a “digital dossier” causes any harm to the user is debatable, but users concerned about such profiles can use an anonymizer to make it difficult to build such profiles, particularly by changing their IP address regularly.
  2. Compilation and Disclosure of Search Histories. Some privacy advocates such as EFF and CDT have expressed legitimate concern at the trend of governments subpoenaing records of the Internet activity of citizens. By causing thousands of users’ activity to be pooled together under a single IP address, anonymizers make it difficult for search engines and other websites–and, therefore, governments–to distinguish the web activities of individual users.
  3. Government Censorship. Some governments prevent their citizens from accessing certain websites by blocking requests to specific IP addresses. But an anonymizer located outside the censoring country can serve as an intermediary, enabling the end-user to circumvent censorship and access the restricted content.
  4. Reverse IP Hacking. Some Internet users may fear that the disclosure of their IP address to a website could increase their risk of being hacked. They can use an anonymizer as an intermediary between themselves and the website, thus preventing disclosure of their IP address to the website.
  5. Traffic Filtering. Some ISPs and access points allocate their Internet bandwidth depending on which websites users are accessing. For example, bandwidth for information from educational websites may be prioritized over Voice-over-IP bandwidth. Under certain circumstances, an anonymizer can obscure the final destination of the end-user’s request, thereby preventing network operators or other intermediaries from shaping traffic in this manner. (Note, though, that to prevent deep packet inspection, an anonymizer must also encrypt data).

How Anonymizers Work

A Simple Anonymizer

An anonymizer is an intermediary server that acts as a proxy for the end-user, accessing websites on the user’s behalf and thereby hiding the user’s IP address (and perhaps other information) from those websites.

Simple anonymizer diagram

A Real-World Analogy: Let’s say I want to order pizza from the local pizza shop, but I do not want them to have my phone number, which they could get from caller ID if I called them directly. Instead of calling them myself, I could call a friend and ask him to call them on my behalf, place my order, and then let me know how much it will cost and the estimated delivery time.
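In code, the same idea is just a proxy configuration. A minimal Python sketch using only the standard library follows; `proxy.example.net:8080` is a placeholder address for a hypothetical anonymizing proxy, not a real service:

```python
import urllib.request

# Route all HTTP(S) requests through an intermediary proxy, so the
# destination website sees the proxy's IP address rather than ours.
# "proxy.example.net:8080" is a placeholder -- substitute an
# anonymizing proxy you actually trust.
proxy = urllib.request.ProxyHandler({
    "http": "http://proxy.example.net:8080",
    "https": "http://proxy.example.net:8080",
})
opener = urllib.request.build_opener(proxy)

# From here on, opener.open("http://example.com/") would reach the
# site via the proxy; the site's logs record the proxy's IP, not the
# end-user's -- exactly like the friend placing the pizza order.
```

Note that this hides your IP address only from the destination site; the proxy operator still sees both who you are and where you are going, a point the "daisy-chaining" setup below addresses.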

A Somewhat More Complicated Anonymizer Setup

A more sophisticated (and more realistic) anonymizer setup pools hundreds or even thousands of end-users through one or more anonymizing intermediaries. Consequently, the web server receives requests from hundreds of end-users via a single IP address (that of the anonymizer), and therefore cannot distinguish individual users or learn their IP addresses.

Complicated anonymizer diagram

An Even More Complicated Anonymizer Setup

The above setup provides a layer of privacy beyond the traditional setup of direct end-user-to-website communication. But even so, if the anonymizer’s logs are compromised, so too is the privacy of the end-user, because it will likely be possible to associate specific requests with individual users.

A much greater degree of privacy protection is obtained by “daisy-chaining” together multiple anonymizers, but every additional hop slows down the browsing experience and leaves additional traces of the end-user’s traffic.

More complicated anonymizer diagram
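The privacy gain from daisy-chaining can be illustrated with a toy simulation. Plain dict nesting stands in here for the per-hop public-key encryption that real systems such as Tor use; the relay names and request are made up. The point is structural: each relay peels exactly one layer, so no single relay sees both the user and the final destination:

```python
# Toy "onion routing" sketch (NOT real cryptography): the client wraps
# its request in one layer per hop, and each hop peels a single layer.

def wrap(request, hops):
    # Build layers from the innermost (final destination) outward.
    message = {"deliver_to": "website", "payload": request}
    for hop in reversed(hops):
        message = {"deliver_to": hop, "payload": message}
    return message

def route(message):
    path = []
    while message["deliver_to"] != "website":
        path.append(message["deliver_to"])  # this hop forwards...
        message = message["payload"]        # ...after peeling its layer
    return path, message["payload"]

hops = ["relay-A", "relay-B", "relay-C"]
path, request = route(wrap("GET /pizza-menu", hops))
# path == ["relay-A", "relay-B", "relay-C"]; request == "GET /pizza-menu"
```

Because relay-A only ever sees relay-B as the next destination, compromising any one relay's logs no longer links a specific user to a specific request, which is precisely the weakness of the single-anonymizer setup above.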

How Do I Set Up an Anonymizer?

A variety of anonymizer services exist. Because each is installed differently, it is impossible to provide universal step-by-step instructions. But perhaps the two most trustworthy (free) options are Tor and Privoxy. While both services have experienced occasional vulnerabilities and hiccups, they are the best established among anonymizers. Other providers include CGIProxy, AlchemyPoint, Nginx, SafeSquid, Squid, and yProxy. Since each anonymizer works differently and comes with its own set of pros, cons, and risks, it is important to evaluate whether a specific anonymizer meets your specific needs.
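As one small example of verifying a setup: Tor exposes a local SOCKS proxy, by default on 127.0.0.1 port 9050, and SOCKS-aware applications are pointed at that port. A Python sketch that simply checks whether such a port is reachable (adjust host/port if your configuration differs):

```python
import socket

def tor_socks_reachable(host="127.0.0.1", port=9050, timeout=1.0):
    # Returns True if something is listening on the given SOCKS port.
    # This only confirms the port is open, not that it is actually Tor.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if tor_socks_reachable():
    print("SOCKS port open -- point your browser/app at 127.0.0.1:9050")
else:
    print("Nothing listening on 9050 -- is Tor running?")
```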

What Are the Downsides and Risks of an Anonymizer?

While an anonymizer offers considerable benefit to end-users concerned about the risks mentioned above, it is not a “silver bullet” or a “privacy panacea.” To start, an anonymizer is primarily a privacy tool, not a security tool (except insofar as sharing your IP address may increase your vulnerability to some cyber-attacks). In other words, an anonymizer does nothing to protect the integrity of your data as it is sent to and from the web server. Moreover, using an anonymizer may increase your vulnerability to a cross-site request forgery, cookie stealing, and, in particular, simple packet sniffing. Beyond the security risks to your data, anonymizers may increase a number of other potential privacy risks:

(1) Anonymizer Recordkeeping. If an anonymizing intermediary server is located within a country that requires ISPs and other service providers to keep records of traffic, your browsing habits are not invisible. The government or an authorized third party could subpoena or seize the history of your browsing activity from the anonymizer.

(2) Man in the Middle Attacks. By routing your traffic through intermediaries, you increase your exposure to man-in-the-middle attacks.

(3) Selling Browsing Records. An anonymizer could sell or provide unauthorized access to your browsing history. If you transmit sensitive unencrypted data through an anonymizer, you are taking a considerable security risk.

(4) Login-Based Records. Some web services such as Google’s Web History record a significant amount of end-user behavior based upon voluntary user login. In other words, when logged in to your Google account, your Google search behavior (among other things) will be personally identifiable by Google, even if you are using an anonymizer.

(5) TCP Only. When most end-users access the Internet, they use many different services (e.g., email, web browsing, teleconferencing), and these services often require different network protocols. “Packet sniffer” tools such as Wireshark will let you examine the protocols and packets sent and received by your computer. Because many anonymizers do not handle non-TCP traffic, they will not anonymize other online activities, such as Voice-over-IP phone calls.

]]>
https://techliberation.com/2009/11/10/privacy-solutions-part-7-how-anonymizers-can-empower-privacy-sensitive-users/feed/ 15 23296
Against Browser Ballot Mandates: EC Now Designing Software? https://techliberation.com/2009/11/10/against-browser-ballot-mandates-ec-now-designing-software/ https://techliberation.com/2009/11/10/against-browser-ballot-mandates-ec-now-designing-software/#comments Tue, 10 Nov 2009 18:32:56 +0000 http://techliberation.com/?p=23284

The European Commission is now designing software. And that software is Microsoft Windows…

Comments of Adam Marcus & Berin Szoka to the European Commission on the Matter of Microsoft’s Browser Ballot Proposal, COMP/C-3/39.530 — Microsoft (Tying)*

Submitted Nov. 9, 2009 [PDF of filing]

We applaud the Commission for not repeating its earlier approach to concerns about tie-ins to Microsoft Windows by ordering Microsoft to cripple the functionality of its operating system— such as occurred with the Windows Media Player.  While a “browser ballot” is certainly a less restrictive approach, we remain unconvinced that mandating such a ballot is necessary in this case, and concerned about the precedent that government intervention may set here for the future of the highly dynamic and innovative software sector.  If, however, a ballot is to be required, we encourage the Commission to accept Microsoft’s ballot as proposed.

A Browser Ballot Mandate Is Not Necessary

The European Community’s Discussion Paper on exclusionary abuses recognizes that bundled discounts infringe Article 82 only when the discount is so large that “efficient competitors offering only some but not all of the components, cannot compete against the discounted bundle.”[1] In this case, a number of alternative browser producers have successfully competed against Internet Explorer in the past—despite it being bundled with Microsoft’s Windows operating system.

Consumers in Europe and elsewhere have more browser choices today than ever before—as indicated by the fact that Microsoft’s proposed browser ballot would provide users a choice among twelve different Web browsers.[2] While Microsoft’s Internet Explorer remains the most popular browser in Europe, its market share is slipping rapidly.[3] Indeed, in at least two European countries (Hungary and Slovakia), Firefox appears to have supplanted Internet Explorer as the leading Web browser.[4] No matter which statistics are used, Internet Explorer’s share of the Web browser marketplace has been steadily declining since at least 2001.[5]

The fact that Internet Explorer still holds a majority market share is not, ipso facto, evidence that Microsoft is abusing its position in the operating system market.  Microsoft does not prevent or hinder users from downloading, installing, and using other Web browsers on their computer if they choose to.  While Firefox may well be the popular choice of the digerati, many users may prefer Internet Explorer because of its greater simplicity (perceived or otherwise).  For example, some users may not want to tinker with plug-ins, and may even be turned off by the windows that regularly appear upon startup of Firefox asking the user to update the browser or plug-ins.[6]

Search engines make finding an alternative to Internet Explorer incredibly easy.  Indeed, the top five Google search results for the word “browser” include links to the four most popular alternatives to Internet Explorer (Mozilla Firefox, Opera, Google Chrome, and Apple Safari).  The fifth link is to a Wikipedia page explaining what a Web browser is.  Internet Explorer appears only on page two of Google search results.

There are many ways of promoting alternative browsers.  Any browser developer can use search engine marketing (paid search ads) or search engine optimization (strategies to cause their websites to appear higher in search results for relevant keywords).  Just as importantly, Google and Apple both have the opportunity to promote their browsers in their wildly popular consumer products.  Google has included links to its Chrome browser from its search engine, while Apple has bundled its Safari browser with iTunes and QuickTime, such that users who update the latter are encouraged to download the former.  These are all legitimate ways to promote browsers, and they indicate that the operating system is not the only point of access to consumers’ attention regarding browser choices.

Mandating a Browser Ballot Sets a Dangerous Precedent of Government Intervention

The European Commission believes that “PC users should have an effective and unbiased choice between Internet Explorer and competing web browsers.”[7] But users already have a choice and, as explained above, many are exercising it.  Furthermore, nearly all Web browsers for the PC are available for free and users don’t have to choose just one—they can install, and use simultaneously, as many as they want.  The current controversy is no different than previous controversies over whether, for example, buyers of new automobiles should be allowed to purchase an automobile without the factory-installed radio or tires.[8]

There is little question that a Web browser is a required application for any Internet-connected computer.  It is thus not surprising that Microsoft would bundle a Web browser with its operating system—and why wouldn’t Microsoft bundle its own Web browser?  The user can, of course, use that browser to easily find and download another browser.

Should other manufacturers that pre-install their own browsers in their products be required to offer a similar browser ballot? Apple bundles its own Safari browser in its desktop and iPhone operating systems.  Research In Motion bundles its own browser in its Blackberry mobile phones. Even most distributions of Linux include a bundled Web browser.  The CTO of Opera, the company that initiated the current controversy, thinks it would be a “good idea” for other operating systems to include a browser ballot.[9] Where will this “Browser Neutrality” thinking end?[10]

Such mandates could easily extend to require ballots for choosing one’s default search engine, media player, instant messaging client, email provider, and so on.  While a ballot may indeed be a reasonable way for a company to offer meaningful choice and allay legitimate concern about any “market power” it might be alleged to possess, government should tread cautiously in such matters, and avoid injecting political decision-making into the software design process.  The threat of regulation already appears to be “chilling” Microsoft’s design decisions.  Most notably, the company excluded a number of applications from Windows 7: Outlook Express, Windows Mail, Windows Calendar, Windows Address Book, Windows Messenger, Windows Movie Maker, and Windows Photo Gallery.[11] Windows Movie Maker had been included in every version of Windows since Windows Me was released in 2000.[12] “Regime uncertainty” about how antitrust regulators might view the bundling of such applications or what kind of “choice mechanism” might be mandated simply does not benefit consumers if it discourages companies like Microsoft from including useful tools in its software—or encourages them to cripple the functionality of those tools, if included, such as making Internet Explorer harder to access.

Microsoft’s Proposed Browser Ballot

Microsoft’s proposal suggests a number of technical issues that may result in confusion for users and additional work for network administrators:

  • Microsoft plans to roll out the update as an “Important” or “High Priority” update,[13] which will mean that the update will be installed automatically and the ballot screen will appear without warning.  This may confuse unsophisticated users who may believe the new pop-up window is attempting to install a virus.
  • The browser ballot update removes the Internet Explorer icon from the Windows taskbar. If the user does not select Internet Explorer when the ballot first appears (which would presumably restore the icon), they may be left not knowing how to access Internet Explorer.
  • Without an easy way for network administrators to prevent the browser ballot from appearing in enterprise environments, the technical support burden of the browser ballot will be shifted to network administrators, who will have to explain to their users how to respond to the ballot—which could particularly burden small enterprises with limited administrative resources.  Microsoft will also need to be careful to ensure that in environments where users are prevented from installing additional software, the browser ballot does not subvert that policy.
  • If the standard user account control (UAC) warnings are bypassed, as Opera has suggested, this could open a security hole that could then be exploited by malicious software.[14]

While such questions should make the Commission think very carefully about the necessity of requiring a browser ballot at all, the Commission should, at the very least, leave such technical matters to the experts at Microsoft so long as the company fairly presents the choices of browsers available to consumers, as it has done in its proposal.  While it might be possible to somewhat increase the “fairness” of the ballot by, for example, randomizing the order in which browser choices appear, Microsoft’s proposal presents consumers multiple options in a manner that is fair enough.  Recognizing the value in consistency of user experience and the fact that it is Microsoft that will have to deal with the technical support burden (and negative reputational effects) the browser ballot is likely to cause, the Commission should defer to Microsoft’s design choices and avoid descending down the slippery slope of micromanaging user interface design.  Annoying menus and pop-ups were widely blamed for the unpopularity of Windows Vista and did real harm to Microsoft’s reputation for usability, which the company is now working hard to overcome with Windows 7.  Simply put, “Too many cooks spoil the stew.”

Conclusion

Properly understood, “Antitrust law protects competition, not competitors.”[15] With so many browser choices and evidence that consumers are fully capable of finding new browsers on their own, it remains unclear that any browser ballot need be mandated to “ensur[e] genuine consumer choice.”[16] But if such a ballot is in fact necessary, Microsoft’s proposal should be approved by the Commission.


* Adam Marcus is Research Fellow and Senior Technologist at The Progress & Freedom Foundation.  Berin Szoka is Director of the Center for Digital Media Freedom at The Progress & Freedom Foundation.  The views expressed in these comments are their own, and are not necessarily the views of the PFF board, fellows or staff.

[1] Directorate-General for Competition, Eur. Comm’n, Discussion Paper on the Application of Article 82 of the Treaty to Exclusionary Abuses (Dec. 2005) ¶ 189, available at http://ec.europa.eu/comm/competition/antitrust/art82/discpaper2005.pdf. The Discussion Paper is a consultation document prepared by the staff of DG Competition. It has not been published in the Official Journal of the European Communities and therefore does not produce any legal effect.

[2] Neelie Kroes, European Commissioner for Competition Policy, “Power transformers cartel busted; Microsoft web browsers case,” Opening remarks at press conference, Brussels, Oct. 7, 2009, http://europa.eu/rapid/pressReleasesAction.do?reference=SPEECH/09/447&format=HTML&aged=0&language=EN&guiLanguage=en.

[3] StatCounter Global Stats, Top 5 Browsers in Europe from Jul 08 to Nov 09, http://gs.statcounter.com/#browser-eu-monthly-200807-200911.

[4] AT Internet Institute, “Internet Explorer seriously shaken up by rival browsers in Europe,” Nov. 2, 2009, http://www.xitimonitor.com/en-us/browsers-barometer/browser-barometer-september-2009/index-1-2-3-180.html?xtor=11.

[5] For a number of statistics, see Wikipedia, Usage share of web browsers, http://en.wikipedia.org/wiki/Usage_share_of_web_browsers (last accessed Nov. 8, 2009).

[6] See, e.g., Adam Thierer, “Another Problem for the Zittrain Thesis—Old People!,” Technology Liberation Front, Apr. 12, 2008, http://techliberation.com/2008/04/12/another-problem-for-the-zittrain-thesis-old-people/.

[7] Neelie Kroes, European Commissioner for Competition Policy, “Power transformers cartel busted; Microsoft web browsers case,” Opening remarks at press conference, Brussels, Oct. 7, 2009, http://europa.eu/rapid/pressReleasesAction.do?reference=SPEECH/09/447&format=HTML&aged=0&language=EN&guiLanguage=en.

[8] See, e.g, Automatic Radio Mfg. Co. v. Ford Motor Co., 272 F.Supp. 744 (D. Mass, 1967), aff’d, 390 F.2d 113 (1st Cir. 1968).

[9] NetworkWorld, “EC decision expected to force IE to better support standards,” July 24, 2009, http://www.networkworld.com/community/node/43851 (“Q: In your opinion, should Apple also be expected to offer a ballot box for its competitors? Should Ubuntu? A: … it may be a good idea.”).

[10] Berin Szoka & Adam Thierer, The Progress & Freedom Foundation, “Net Neutrality, Slippery Slopes & High-Tech Mutually Assured Destruction,” Progress Snapshot No. 5.11, Oct. 2009, http://www.pff.org/issues-pubs/ps/2009/ps5.11-net-neutrality-MAD-policy.html.

[11] Microsoft, “Finding your applications in Windows 7,” http://download.live.com/windows7 (last accessed Nov. 8, 2009). See also Brad Linder, “What’s not in Windows 7? Windows Movie Maker, Windows Mail, etc,” DownloadSquad, Nov. 3, 2008, http://www.downloadsquad.com/2008/11/03/whats-not-in-windows-7-windows-movie-maker-windows-mail-etc.

[12] Press Release, Microsoft, Microsoft Announces Immediate Availability Of Windows Millennium Edition (Windows Me), Sept. 14, 2000, http://www.microsoft.com/Presspass/press/2000/sept00/availabilitypr.mspx; PapaJohn, Windows Movie Maker in Windows 7, Bright Hub, Oct. 29, 2009, http://www.brighthub.com/multimedia/video/articles/22658.aspx.

[13] Microsoft, Proposed Commitment ¶ 9, July 24, 2009, http://www.microsoft.com/presspass/presskits/eu-msft/docs/07-24-09Commitment.doc.

[14] Gregg Keizer, Report: Browser makers contest Microsoft browser ballot deal, SFGate, Nov. 5, 2009, http://www.sfgate.com/cgi-bin/article.cgi?f=/g/a/2009/11/05/urnidgns852573C40069388000257665005C1B49.DTL.

[15] Thomas Barnett, head of the Department of Justice’s Antitrust division, “Interoperability Between Antitrust and Intellectual Property,” Presentation to the George Mason University School of Law Symposium, Managing Antitrust Issues in a Global Marketplace, Washington, DC, Sept. 13, 2006, available at http://www.justice.gov/atr/public/speeches/218316.htm, citing Brooke Group Ltd. v. Brown & Williamson Tobacco Corp., 509 U.S. 209, 224 (1993) (“It is axiomatic that the antitrust laws were passed for ‘the protection of competition, not competitors.’“ (quoting Brown Shoe Co. v. United States, 370 U.S. 294, 320 (1962))).

[16] Press Release, European Commission, Antitrust: Commission welcomes new Microsoft proposals on Microsoft Internet Explorer and Interoperability, MEMO/09/352, http://europa.eu/rapid/pressReleasesAction.do?reference= MEMO/09/352&format=HTML&aged=0&language=EN&guiLanguage=en.


]]>
https://techliberation.com/2009/11/10/against-browser-ballot-mandates-ec-now-designing-software/feed/ 8 23284
Announcing PFF’s Taxonomy of Online Security & Privacy Threats https://techliberation.com/2009/10/30/announcing-pffs-taxonomy-of-online-security-privacy-threats/ https://techliberation.com/2009/10/30/announcing-pffs-taxonomy-of-online-security-privacy-threats/#comments Fri, 30 Oct 2009 17:51:34 +0000 http://techliberation.com/?p=23131

PFF summer fellow Eric Beach and I have been working on what we hope is a comprehensive taxonomy of all the threats to online security and privacy. In our continuing Privacy Solutions Series, we have discussed and will continue to discuss specific threats in more detail and offer tools and methods you can use to protect yourself.

The taxonomy is located here.

The taxonomy of 21 different threats is organized as a table that indicates the “threat vector” and goal(s) of attackers using each threat. Following the table is a glossary defining each threat and providing links to more information. Threats can come from websites, intermediaries such as an ISP, or from users themselves (e.g., using an easy-to-guess password). The goals range from simply monitoring which (or what type of) websites you access to executing malicious code on your computer.

Please share any comments, criticisms, or suggestions as to other threats or self-help privacy/security management tools that should be added by posting a comment below.

]]>
https://techliberation.com/2009/10/30/announcing-pffs-taxonomy-of-online-security-privacy-threats/feed/ 4 23131
The Quid Pro Quo In Practice https://techliberation.com/2009/09/09/the-quid-pro-quo-in-practice/ https://techliberation.com/2009/09/09/the-quid-pro-quo-in-practice/#comments Wed, 09 Sep 2009 19:13:09 +0000 http://techliberation.com/?p=21188

My PFF colleagues Berin Szoka and Adam Thierer have written many times about the quid pro quo by which advertising supports free online content and services: somebody must pay for all the supposedly “free” content on the Internet. There is no free lunch!

Here are two recent examples I came across of the quid pro quo being made very apparent to users.

Hulu error message

Hulu. Traditionally, broadcast media has been a “two-sided” market: Broadcasters give away content to attract audiences, and broadcasters “sell” that audience to advertisers. The same is true for Internet video. But watching Hulu over the weekend, I noticed something interesting: Adblock Plus blocked the occasional Hulu ad, but every time it did so, I was treated to 30 seconds of a black screen (instead of the normal 15-second ad) showing a message from Hulu reminding me that “Hulu’s advertising partners allow [them] to provide a free viewing experience” and suggesting that I “Confirm all ad-blocking software has been fully disabled.”

Although I use AdBlock on many newspaper websites (because I just can’t focus on the articles with flashing ads next to the text), I would much rather watch a 15-second ad than wait 30 seconds for my show to resume. I think most users would feel the same way. We get annoyed by TV ads because they take up so much of our time. If Wikipedia is to be believed, there’s now an average of 9 minutes of advertisements per half-hour of television. That’s double the amount of advertising that was shown in the 1960s.

But online services such as Hulu show an average of just 37 seconds of advertising per episode. Amazingly, some shows garner ad rates 2-3 times higher than on prime-time television. Why might ad rates for online shows be higher? Because:

  1. When a show has only 15 seconds of ads, you’re less likely to turn away from the screen to do something else;
  2. Advertisers are more certain that viewers are watching their ads (as opposed to changing the channel or skipping over it with a DVR); and
  3. Online viewers are twice as likely to remember a commercial they’ve seen on Hulu as one they’ve seen on television, at least in part because of factors 1 and 2, and perhaps because Internet video ads might be more effective in other ways.

As for me, I’ve reconfigured Adblock Plus to not block ads on Hulu. But even if users like me don’t block video ads on sites like Hulu, they may not be able to generate enough revenue to survive. Traditional media providers might be willing to cross-subsidize experiments in online video distribution for a while from offline revenue streams, but at some point, either online video will have to produce comparable revenue or the quality of content will deteriorate notably in the gradual shift to online distribution.

The problem is that, even if online video services can sell ad time for 3 times as much as broadcasters, because there is almost 15 times as much ad time on broadcast television as on online services, an online service will still earn only about 1/5 as much revenue as a traditional broadcaster. This is why online video is expected to drive adoption of personalized (or “behaviorally targeted”) advertising: If online video programmers can target advertising to the individual user’s likely interest, rather than to a crude profile of their likely audience, they can generate much higher revenue per ad because advertisers won’t be wasting their ad budgets showing users ads for things they aren’t interested in! The increased revenue for online content providers made possible by targeted advertising is the “mother’s milk” that many websites need to survive.
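The arithmetic behind that 1/5 figure, using the numbers cited above (9 minutes of ads per half-hour on TV, roughly 37 seconds per episode online, online rates up to 3x higher):

```python
# Back-of-the-envelope revenue comparison, per episode.
tv_ad_seconds = 9 * 60     # 9 minutes of ads per half-hour broadcast
online_ad_seconds = 37     # average ad time per online episode
rate_multiplier = 3        # online ad time sells for up to ~3x as much

revenue_ratio = (online_ad_seconds * rate_multiplier) / tv_ad_seconds
print(round(revenue_ratio, 2))  # -> 0.21, i.e. roughly 1/5 of TV revenue
```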

Google Maps. On 8/25, Google announced that it had updated Google Maps for mobile to periodically report the user’s location (based on the GPS chip in their device) back to Google. But before you reach for your tinfoil hats and start shouting about conspiracy theories, let me explain why this “tracking” is actually fantastic news for users:

  1. Google uses the reported location (and speed) information to assess traffic conditions in real time. This traffic information is then shared with other Google Maps users in near real time, so everyone benefits! If only a few people participated, the data would not be very helpful. But when lots of people participate, the data is more accurate and available for more roads than would otherwise be possible.
  2. It’s completely optional and users are fully informed of what the software is doing.
  3. People who do not want their location tracked can opt out at no cost, and they get to keep using Google Maps for free.
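The crowdsourcing idea in item 1 boils down to averaging many anonymous (road segment, speed) reports. A minimal sketch, with made-up segment names and speeds:

```python
from collections import defaultdict

# Each tuple is one anonymous report: (road segment, observed mph).
# The segments and numbers here are invented for illustration.
reports = [("I-95 north", 22), ("I-95 north", 18), ("Main St", 41),
           ("I-95 north", 20), ("Main St", 39)]

speeds = defaultdict(list)
for segment, mph in reports:
    speeds[segment].append(mph)

# Average the reports per segment to get a live traffic estimate.
traffic = {seg: sum(v) / len(v) for seg, v in speeds.items()}
# traffic == {"I-95 north": 20.0, "Main St": 40.0}
```

More reports per segment means a more reliable average and coverage of more roads, which is exactly why the data improves as participation grows.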

Conclusion

In the Hulu example, the basic quid pro quo for getting all that free video programming is watching a few ads. It’s possible for people to block the ads, but then they’ll waste even more time looking at a black screen. That basic quid pro quo might prove insufficient to support the quality and quantity of video programming users want online, but without at least the basic quid pro quo of not blocking ads, video programming won’t get past stage one.

In the Google Maps example, the quid pro quo for getting traffic data is sharing your location with Google. Users can still get the traffic data without sharing their location, but if everyone did that, there would be no traffic data. This highlights the problem of free-riding created by the no-cost opt-out: It’s still possible to be a freeloader with both services, but if everyone did that, these services simply wouldn’t survive.

]]>
https://techliberation.com/2009/09/09/the-quid-pro-quo-in-practice/feed/ 11 21188
Privacy Solutions: Overview, Encryption & Anonymization https://techliberation.com/2009/08/06/privacy-solutions-overview-encryption-anonymization/ https://techliberation.com/2009/08/06/privacy-solutions-overview-encryption-anonymization/#comments Thu, 06 Aug 2009 19:45:35 +0000 http://techliberation.com/?p=19990

By Eric Beach, Adam Marcus & Berin Szoka

In the first entry of the Privacy Solution Series, Berin Szoka and Adam Thierer noted that the goal of the series is “to detail the many ‘technologies of evasion’ (i.e., empowerment or user ‘self-help’ tools) that allow web surfers to better protect their privacy online.” Before outlining a few more such tools, we wanted to step back and provide a brief overview of the need for, goals of, and future scope of this series.

We started this series because, to paraphrase Smokey the Bear, “Only you can protect your privacy online!” While the law can play a vital role in giving full effect to the Fourth Amendment’s restraint on government surveillance, privacy is not something that can simply be created or enforced by regulation because, as Cato scholar Jim Harper explains, privacy is “the subjective condition that people experience when they have power to control information about themselves.” Thus, when the appropriate technological tools and methods exist and users “exercise that power consistent with their interests and values, government regulation in the name of privacy is based only on politicians’ and bureaucrats’ guesses about what ‘privacy’ should look like.” As Berin has put it:

Debates about online privacy often seem to assume relatively homogeneous privacy preferences among Internet users. But the reality is that users vary widely, with many people demonstrating that they just don’t care who sees what they do, post or say online. Attitudes vary from application to application, of course, but that’s precisely the point: While many reflexively talk about the ‘importance of privacy’ as if a monolith of users held a single opinion, no clear consensus exists for all users, all applications and all situations.

Moreover, privacy and security are both dynamic: The ongoing evolution of the Internet, shifting expectations about online interaction, and the constant revelations of new security vulnerabilities all make it impossible to simply freeze the Internet in place. Instead, users must be actively engaged in the ongoing process of protecting their privacy and security online according to their own preferences.

Our goal is to educate users about the tools that make this task easier. Together, user education and empowerment form a powerful alternative to regulation. That alternative is “less restrictive” because regulatory mandates come with unintended consequences and can never reflect the preferences of all users.

Many forthcoming Privacy Solution Series entries will describe tools that fit into two broad categories:

  • Encryption (protecting communications): The scrambling of content to protect against unauthorized viewing.
  • Anonymization (protecting identity): Paradoxically, the Internet offers an unprecedented degree of both anonymity and transparency/track-ability. While most behavior online does leave a plethora of tracks in the form of ISP records, server logs, and cookie IDs, users can achieve a significantly greater degree of privacy online by blocking data collection mechanisms like cookies or routing traffic through a non-monitored server.
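The distinction between the two categories can be made concrete with a deliberately toy sketch (NOT real cryptography; real tools use TLS/AES and proper anonymizing networks). The first function scrambles content while leaving identity visible; the second hides identity behind an unlinkable handle while leaving content readable:

```python
import hashlib

def toy_encrypt(data: bytes, key: bytes) -> bytes:
    # XOR with a repeating key: content is unreadable without the key,
    # but the sender's identity is untouched. (Toy only -- not secure.)
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def pseudonymize(identity: str, salt: str) -> str:
    # Identity is replaced by a one-way hash (a stable "handle"),
    # but the content of the message stays readable.
    return hashlib.sha256((salt + identity).encode()).hexdigest()[:12]

ciphertext = toy_encrypt(b"meet at noon", b"key")
assert toy_encrypt(ciphertext, b"key") == b"meet at noon"  # XOR is symmetric
```

A user who wants both properties needs both kinds of tool, which is why the series treats encryption and anonymization as complementary rather than interchangeable.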

For some, one category is more important than the other. For example, some believe that public message boards are more civil when users are prohibited from posting anonymously and posts are signed with the user’s real name instead of a made-up “handle.” But these same people may feel very strongly that the content of emails should be protected (i.e., encrypted) so that only the intended recipient can view them.

In other situations and/or for other people, the exact opposite may be true. A user might not care that Gmail scans their email to provide targeted advertising as long as Google does not associate that information with their actual identity.

Regulatory solutions inevitably fail to recognize such complexity and even inconsistency of user preferences. By contrast, user empowerment offers diverse solutions for a diverse citizenry.

Additional information about encryption, anonymity & other technologies of evasion

  • Bruce Schneier’s Applied Cryptography (an older version is partially available online) is considered one of the definitive works about encryption for the layman.
  • Access Denied: The Practice and Policy of Global Internet Filtering, published in 2008 by Harvard’s Berkman Center, discusses encryption and technologies of evasion, while also describing current filtering and censoring efforts in many countries. You can view much of the book at the OpenNet Initiative or preview it at Google Books. Berkman’s 2007 Circumvention Landscape Report outlines technologies of censorship and evasion in an applied context.
  • The Electronic Frontier Foundation offers an excellent introduction to the basics of encryption as part of its Surveillance Self-Defense Project.
  • The Handbook for Bloggers and Cyberdissidents, published by Reporters Without Borders, details techniques for circumventing censorship.
]]>
https://techliberation.com/2009/08/06/privacy-solutions-overview-encryption-anonymization/feed/ 21 19990
Privacy Solutions (Part 4): Firefox Privacy Features https://techliberation.com/2009/03/16/privacy-solutions-part-4-firefox-privacy-features/ https://techliberation.com/2009/03/16/privacy-solutions-part-4-firefox-privacy-features/#comments Mon, 16 Mar 2009 16:29:29 +0000 http://techliberation.com/?p=17401

Firefox logo
As noted in the first installment of our “Privacy Solution Series,” we are outlining various user-empowerment or user “self-help” tools that allow Internet users to better protect their privacy online, and especially to defeat tracking for online behavioral advertising purposes. These tools and methods form an important part of a layered approach that we believe offers an effective alternative to government-mandated regulation of online privacy.

In the last installment, we covered the privacy features embedded in Microsoft’s Internet Explorer (IE) 8. This installment explores the privacy features in the Mozilla Foundation’s Firefox 3, both the current 3.0.7 version and the second beta of the next release, 3.5. (NOTE: The name of the next version of Firefox was recently changed from 3.1 to 3.5 to reflect the large number of changes, but the beta is still named 3.1 Beta 2.) We’ll make it clear which features are new to 3.5 and which are shared with 3.0.7. Future installments will cover Google’s Chrome 1.0, Apple’s Safari 4, and some of the more useful privacy plug-ins for browsers. The availability and popularity of privacy plug-ins for Firefox such as Adblock Plus (which we discussed here), NoScript, and Tor significantly augments the privacy management capabilities of Firefox beyond what is currently baked into the browser. In evaluating the Web browsers, we examine:

(1) cookie management; (2) private browsing; and (3) other privacy features

History of Firefox

Firefox descends from the first widely used graphical web browser, NCSA Mosaic, developed at the National Center for Supercomputing Applications beginning in 1992 and first released in 1993. Mosaic co-author Marc Andreessen co-founded Netscape Communications and was the lead developer of Netscape Navigator, which was first released in 1994 and based in part on NCSA Mosaic code. In 1998, Netscape publicly released the source code for the latest version of its browser and created the Mozilla Organization to coordinate its development. AOL acquired Netscape Communications later that year, and when AOL scaled back its involvement with the Mozilla Organization in 2003, the Mozilla Foundation was launched to ensure the browser could survive without Netscape or AOL. The Mozilla Foundation released Firefox 1.0 on November 9, 2004. According to Net Applications, Firefox is currently the second-most popular Web browser after Internet Explorer, with 21.72% of the market in Q1 2009.

Cookie Management

To access Firefox’s basic cookie management and privacy settings, open the “Tools” menu, click “Options,” and then click on the “Privacy” tab to display the following options:

Options dialog box

Instead of using a slider, as Internet Explorer does, Firefox gives more direct control over cookies. Users can choose to refuse all cookies, refuse all third-party cookies (see the previous post in this series for an explanation of the difference between first-party cookies and third-party cookies), and/or control when cookies expire. The “keep until” box gives three options:

(1) “they expire” – Cookies determine their own expiration date.

(2) “I close Firefox” – Cookies are deleted when you close the browser.

(3) “ask me every time” – Every time a cookie is sent to the user’s computer, the user is asked whether to “Allow” the cookie (accept it and let the cookie determine its own expiration date), “Allow for Session” (equivalent to the “I close Firefox” setting), or “Deny.” Firefox can also optionally save the user’s preference for all future cookies received from that website. The “Show Details” button allows true power users to view the contents of each cookie before making a decision, as seen here:

Confirm setting cookie dialog box

By clicking the “Show Cookies” button in the Privacy tab of the Options dialog box, users can view all of the cookies already saved on their computer and delete individual cookies or all cookies at once.

Cookies dialog box

Finally, by clicking the “Exceptions” button in the Privacy tab of the Options dialog box, users can specify which websites are always or never allowed to set cookies.

Exceptions dialog box

In addition to having the option of deleting all cookies whenever the browser is closed, users can clear other types of private data when the browser is closed. The following dialog box is displayed when a user clicks on the “Settings” button in the Privacy tab of the Options dialog box.

Clear Private Data dialog box

Private Browsing

Private Browsing icon
Similar to Internet Explorer 8’s “InPrivate Browsing” feature (see the previous post in this series for more information) and Chrome’s Incognito, Firefox 3.5 will include a new “Private Browsing Mode” that protects so-called “over the shoulder” privacy. To enable Private Browsing Mode, select “Private Browsing” from the Tools menu. To disable it and reload all tabs that were open when you enabled it, just uncheck the same “Private Browsing” item in the Tools menu. There is a hidden way to make Firefox 3.1 Beta 2 always start in Private Browsing Mode, and a plan to possibly provide an easier way to do this in the final 3.5 release, but the only obvious use for this would be on public computers (e.g., at a library or coffee shop) where it can’t be guaranteed that each user will close the browser before leaving.

Other Privacy Features

  • Master Password – As more and more can be done online and more and more sites require user accounts (and passwords), having all those passwords stored in your web browser can be a security problem unto itself. Firefox allows you to view saved passwords, but it also allows you to protect all of your site-specific saved passwords with a single master password. Your saved passwords cannot be used to automatically log into websites and other individuals with access to your computer cannot view your saved passwords unless the master password is entered. Firefox also has a password quality meter to show you how secure your master password is from cracking attempts.
  • Instant Web Site ID – For all websites with an Extended Validation SSL Certificate, this feature displays the website owner’s name to the left of the URL in the address bar. Clicking on the “favicon” on the left side of the address bar displays additional information about the certificate (whether an Extended Validation Certificate or regular SSL certificate) and whether the connection is SSL-encrypted. A second click displays the Page Info dialog box which reports whether you’ve previously visited the website and how many times, whether the website is storing cookies on your computer (which you can view with another click), and if there are saved passwords for the website on your computer (which you can also view with another click). From the Page Info dialog box you can also view all of the media embedded in the webpage, all of the meta tags in the HTML source code for the page, any RSS feeds on the page, and the permissions in effect for the page.
  • Optional automatic phishing and malware protection – Two options in the “Security” tab of the Options dialog box, “Tell me if the site I’m visiting is a suspected attack site” and “Tell me if the site I’m visiting is a suspected forgery,” allow Firefox to automatically protect users from malware (attack sites) and phishing scams (forgery sites). When either of these options is enabled, Firefox automatically checks the URL of the page you’re visiting against a list of reported phishing and/or malware sites that it downloads in the background every 30 minutes. If you navigate to a page on one of these lists, Firefox double-checks that the URL is still on the list by sending a request to google.com, which maintains the lists of identified malware and phishing sites used by Firefox. The anti-phishing aspect of this feature is equivalent to Internet Explorer’s SmartScreen Filter.
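The master password feature above rests on a simple idea: stretch one memorable password into a cryptographic key that protects everything else. Firefox’s actual implementation lives in Mozilla’s NSS library, but the underlying idea can be sketched with Python’s standard-library PBKDF2 (the function names here are our own, not Mozilla’s):

```python
import hashlib
import os

def derive_key(master_password: str, salt: bytes) -> bytes:
    # Stretch the master password into a 32-byte key. The high iteration
    # count deliberately slows down brute-force cracking attempts.
    return hashlib.pbkdf2_hmac("sha256", master_password.encode(), salt, 200_000)

salt = os.urandom(16)            # random salt, stored alongside the password file
key = derive_key("correct horse battery staple", salt)

# The same password and salt always reproduce the same key...
assert key == derive_key("correct horse battery staple", salt)
# ...while a wrong guess yields a completely different key.
assert key != derive_key("hunter2", salt)
```

The derived key, never the master password itself, is then used to encrypt the stored site passwords, which is why nothing can be decrypted until the master password is re-entered.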

Conclusion

In terms of privacy, what makes Firefox unique compared to the other popular browsers is the extensive number of add-ons (also called “plug-ins” or “extensions”) designed to protect users’ privacy. Google’s Chrome browser does not currently support third-party add-ons but plans to do so in an upcoming release. Microsoft’s Internet Explorer does support extensions, and Microsoft has a website devoted to cataloging them, but it offers nothing like the variety and complexity of the add-ons available for Firefox. The two most popular Firefox add-ons (in terms of total downloads; currently second and fourth most popular in terms of weekly downloads) are specifically related to privacy. Adblock Plus (ABP) uses dynamically updated “subscriptions” to maintain a list of unwanted third-party content and automatically blocks that content from being displayed or run by Firefox. ABP can block Flash code, images, external scripts, stylesheets, frames, tracking cookies, web bugs, text ads, backgrounds, and elements matching any HTML class, id, or other CSS selector. By default, ABP allows all such elements unless they are blocked by a filter. NoScript, by contrast, blocks all Java, JavaScript, Flash, and other plugin content unless you explicitly allow it on a particular website, either (i) temporarily for your current session (until you close the browser) or (ii) permanently for all future sessions. Thus, with these two add-ons, Firefox offers security-conscious users a much more secure (and thus private) browsing environment than is currently available in other browsers. We already covered Adblock Plus in a previous installment of our Privacy Solutions Series. We plan to cover NoScript and other popular Firefox add-ons such as TorButton and FoxyProxy in future installments.

Additional Reading / Links

]]>
https://techliberation.com/2009/03/16/privacy-solutions-part-4-firefox-privacy-features/feed/ 631 17401
Privacy Solutions (Part 3): Internet Explorer Privacy Features https://techliberation.com/2009/03/06/privacy-solutions-series-part-3-internet-explorer-privacy-features/ https://techliberation.com/2009/03/06/privacy-solutions-series-part-3-internet-explorer-privacy-features/#comments Fri, 06 Mar 2009 14:50:26 +0000 http://techliberation.com/?p=12538

By Adam Thierer, Berin Szoka, & Adam Marcus

IE logo
As noted in the first installment of our “Privacy Solution Series,” we are outlining various user-empowerment or user “self-help” tools that allow Internet users to better protect their privacy online, and especially to defeat tracking for online behavioral advertising purposes. These tools and methods form an important part of a layered approach that we believe offers an effective alternative to government-mandated regulation of online privacy.

In some of the upcoming installments we will be exploring the privacy controls embedded in the major web browsers consumers use today: Microsoft’s Internet Explorer (IE) 8, the Mozilla Foundation’s Firefox 3, Google’s Chrome 1.0, and Apple’s Safari 4. In evaluating these browsers, we will examine three types of privacy features:

(1) cookie management controls; (2) private browsing; and (3) other privacy features

We will first be focusing on the default features and functions embedded in the browsers. We plan to do subsequent installments on the various downloadable “add-ons” available for browsers, as we already did for AdBlock Plus in the second installment of this series.

In this installment, we’ll be taking a look at the privacy-related features in the most popular browser in use today, Microsoft’s Internet Explorer. Specifically, we’ll be examining the most recent version of the browser, IE 8, Release Candidate 1. We’ll make it clear which features are new to IE 8 and those which are shared with IE 7.

Basic Background

Microsoft’s Internet Explorer browser was launched in 1995 and quickly became America’s most popular web browser, displacing Netscape’s Navigator browser. In recent years, IE has faced new challenges from the Mozilla Foundation’s Firefox browser, Apple’s Safari, the Opera browser, and others. (For an excellent history / timeline of web browsers, click here.) Despite these new challenges, IE still commands over 70% of the browser market. Like most other web browsers, Internet Explorer is free. So too are the features we are describing here.

Before we get further into the discussion of privacy controls, it’s important for readers to understand the difference between “first-party” and “third-party” content on webpages. Many webpages today contain a combination of content from many different websites, which enables powerful “Web 2.0” functionality like an interactive Google map displayed alongside an address or a “Digg This” link in a blog post. Third-party content can also be used to track users across websites and to serve up advertising. All content loaded from the same domain as the one displayed in the address bar is first-party content; all content loaded from other domains is third-party content. Internet Explorer has a “Privacy Report” function that can show you the source of all the different content elements in the current webpage. To access it, select Webpage Privacy Policy from IE7’s Page menu or IE8’s View menu.
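The first-party/third-party distinction described above can be sketched in a few lines of Python. This is a simplification: it compares whole hostnames, whereas real browsers compare registrable domains so that, for example, www.example.com and ads.example.com count as the same first party.

```python
from urllib.parse import urlparse

def is_third_party(page_url: str, resource_url: str) -> bool:
    # A resource is third-party if it is served from a different host
    # than the page shown in the address bar (simplified rule).
    return urlparse(page_url).hostname != urlparse(resource_url).hostname

# A stylesheet from the page's own domain is first-party content...
assert not is_third_party("http://example.com/post", "http://example.com/style.css")
# ...while an ad network's tracking pixel is third-party content.
assert is_third_party("http://example.com/post", "http://tracker.adnetwork.com/pixel.gif")
```

A browser’s “Privacy Report” is, in essence, this check applied to every element loaded by the current page.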

Basic Cookie Management Controls

To access Internet Explorer’s basic cookie management and privacy settings, open the “Tools” menu, click “Internet Options,” and then click on the “Privacy” tab to display the following options:

IE8 Internet Privacy Options

Users can configure the slider on the upper left-hand side of the window to establish their preferred level of cookie privacy. There are 6 options on the sliding scale from which to choose. Starting from the top of the slider bar:

(1) “Block all cookies” — Blocks IE from receiving any new cookies and blocks websites from reading any existing cookies on your computer. (Of course, that would greatly inconvenience users who regularly access websites that require information from the user, such as a Web-based email site that requires users to log in every time they access the website.)

(2) “High” — Blocks all cookies from websites that do not have a P3P compact privacy policy or whose compact privacy policy specifies that personally identifiable information is used without your explicit consent. Cookies already on your computer can only be read by the site that created them.

(3) “Medium High” — “Blocks third-party cookies that do not have a compact privacy policy,” “Blocks third-party cookies that save information that can be used to contact you without your explicit consent,” and “Blocks first-party cookies that save information that can be used to contact you without your implicit consent.”

(4) “Medium” — This setting “Blocks third-party cookies that do not have a compact privacy policy,” “Blocks third-party cookies that save information that can be used to contact you without your explicit consent,” and “Restricts first-party cookies that save information that can be used to contact you without your implicit consent.”

(5) “Low” — This setting “Blocks third-party cookies that do not have a compact privacy policy” and “Restricts third-party cookies that save information that can be used to contact you without implicit consent.”

(6) “Allow all cookies” — This setting allows all cookies from any website.

A P3P compact privacy policy is a machine-readable summary of the full P3P specification, which is a standardized method for explaining a website’s privacy policy. So when IE states that it will “block[] third-party cookies that save information that can be used to contact you without your explicit consent,” it means that the cookie will be blocked unless the site has a P3P compact privacy policy that either indicates that only non-identifiable (NOI) information is collected, or that for every data collection PURPOSE and every type of RECIPIENT that the website shares collected data with, the site’s policy is that the user must opt in (“explicitly consent”) to the practice.
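The blocking rule just described can be sketched as a small decision function. This is a loose, hypothetical simplification: in the real P3P compact vocabulary, purpose and recipient tokens carry consent suffixes (“i” for opt-in, “o” for opt-out, “a” for always), and “NOI” signals that no identifiable information is collected; the actual specification has many more tokens and subtleties than shown here.

```python
def cookie_acceptable(compact_policy: str) -> bool:
    # Simplified reading of IE's rule: accept the cookie if the site declares
    # it collects no identifiable information ("NOI"), or if every qualified
    # purpose/recipient token requires opt-in consent (the "i" suffix).
    tokens = compact_policy.split()
    if "NOI" in tokens:
        return True
    qualified = [t for t in tokens if t[-1] in "ioa" and t[:-1].isupper()]
    return bool(qualified) and all(t.endswith("i") for t in qualified)

assert cookie_acceptable("NOI DSP COR")        # no identifiable data collected
assert cookie_acceptable("CURi OURi")          # every use requires opt-in
assert not cookie_acceptable("CURa ADMo OUR")  # some uses don't require opt-in
```

The point is that the browser never reads the site’s human-language privacy policy; it mechanically evaluates these machine-readable tokens against the user’s chosen slider level.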

When the slider bar is set anywhere other than the “High” and “Low” levels, users can also click the “Sites” button and then specify different cookie security levels for individual websites. The advantage of this approach is that it lets users create their own personal “white lists” and “black lists” of sites for which they either never want cookies blocked, or for which they always want cookies blocked. This increases the privacy-configurability of the browsing experience. For example, the following screen shows two sites that have been whitelisted and two hypothetical sites that have been blacklisted.

IE8 Per Site Privacy Actions

In addition, if the user wishes to manually delete their cookies, web browsing history, form data, personal passwords, or other stored information, they can do so on the “General” tab under the “Browsing History” section. Or, in the new IE 8, they can do so under the new “Safety” drop-down menu (in the Command toolbar) under the first option, “Delete Browser History.” They can also configure IE 8 so that all of this data is deleted each time the browser is closed (essentially converting “persistent cookies” into “session cookies,” concepts Adam Marcus has explained previously). The following screen shows how this user is choosing to delete just their temporary Internet files, cookies, and browsing history. Favorite websites are websites the user has bookmarked.

IE8 Delete Browsing History

Using these controls, a particularly privacy-sensitive user who only trusted two or three sites (say, their bank and their employer’s website) could allow cookies for only those sites and block cookies for all other websites. Again, this assumes that they do not mind the potential hassles of logging in to many other sites on each visit or losing custom preferences that would otherwise be stored in a cookie.

Advanced Cookie Management – “InPrivate Filtering”

Microsoft explains its InPrivate Filtering feature as follows:

Today websites increasingly pull content in from multiple sources, providing tremendous value to consumer and sites alike. Users are often not aware that some content, images, ads and analytics are being provided from third party websites or that these websites have the ability to potentially track their behavior across multiple websites. InPrivate Filtering provides users an added level of control and choice about the information that third party websites can potentially use to track browsing activity.

InPrivate Filtering is off by default and must be enabled on a per-session basis. To use this feature, select InPrivate Filtering from the Safety menu.

In “Automatically Block” mode, InPrivate Filtering will automatically block a site if IE finds that site’s content embedded in more than a user-specified number of other sites (the default is 10) visited by the user.  You can also manually control which sites are blocked, and import and export your list of white/blacklisted sites to share that list with others.
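The “Automatically Block” heuristic boils down to counting how many distinct first-party sites each piece of third-party content shows up on. Here is a hypothetical sketch of that logic (class and method names are ours, not Microsoft’s):

```python
from collections import defaultdict

class InPrivateFilterSketch:
    # Sketch of the frequency heuristic: track how many distinct first-party
    # sites each third-party host appears on, and block it once the count
    # exceeds the user-specified threshold (IE8's default is 10).
    def __init__(self, threshold: int = 10):
        self.threshold = threshold
        self.seen_on = defaultdict(set)

    def observe(self, page_host: str, third_party_host: str) -> None:
        # Record that this third party's content was embedded in this page.
        self.seen_on[third_party_host].add(page_host)

    def should_block(self, third_party_host: str) -> bool:
        return len(self.seen_on[third_party_host]) > self.threshold

f = InPrivateFilterSketch(threshold=2)
for page in ("news.example", "mail.example", "shop.example"):
    f.observe(page, "tracker.adnetwork.com")

assert f.should_block("tracker.adnetwork.com")   # seen on 3 sites, over threshold
assert not f.should_block("cdn.onesite.com")     # never observed cross-site
```

Content that appears on only one or two sites is almost certainly ordinary embedding; content that follows you across dozens of unrelated sites is, by this heuristic, a tracker.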

The beta version of IE8 included a subscriptions feature that would have allowed users to automatically receive updated whitelists or blacklists from others, much like the subscription feature in AdBlock Plus that we discussed previously. However, this functionality was removed in the “Release Candidate 1” version of IE8 (released Jan. 26, 2009) for unspecified reasons. While we recognize that not every beta feature makes it into final releases because of challenges in implementation, we very much hope Microsoft will ultimately add the subscription feature to Internet Explorer 8. InPrivate Filtering goes a long way in empowering truly privacy-sensitive users to take more granular control over their own privacy, but a subscription feature would allow less sophisticated users to rely on groups or other individuals they trust to help them avoid specific sites according to their concerns about privacy or security. Indeed, we hope that other browser manufacturers consider incorporating such tools into their browsers. Perhaps the privacy advocates who currently focus on inventing one-size-fits-all regulatory or legislative solutions could channel their enthusiasm about user privacy into actually developing whitelists and blacklists.

Private Browsing

Another new privacy-related feature in Internet Explorer 8 is InPrivate Browsing mode (akin to “Incognito” mode in Chrome), which protects so-called “over the shoulder” privacy, although that’s a somewhat misleading term. By not saving any record of your web browsing while InPrivate Browsing mode is turned on, this feature ensures that others with access to your computer will not know what websites you have accessed. Some people like being able to refer to their browser history and don’t want to delete all of their cookies, but want to hide all traces of some of their browsing activities, such as shopping online for a surprise gift, searching for information about a medical condition they don’t want to disclose or, most obviously, enjoying pornography.

When InPrivate Browsing mode is enabled, none of the varieties of “browsing history” data is saved, but none of your previous history is deleted, either. This comes in handy because, if someone with direct access to your computer is monitoring your browser history to see what you’ve been up to, deleting all of your browsing history would suggest that you’ve been doing something you wanted to hide. InPrivate Browsing mode allows you to surf privately when desired, without making it obvious that you’re doing so. Parents who are concerned about their kids using InPrivate Browsing can use the parental controls in Windows Vista to disable it, but there does not appear to be a way to disable InPrivate Browsing on Windows XP.

Below is a screenshot of InPrivate Browsing mode, which, again, can be enabled by clicking on the new “Safety” drop-down menu in IE 8 and selecting “InPrivate Browsing.”

IE8 InPrivate Browsing

While InPrivate Browsing is active, the following takes place:

  • New cookies are not stored:
    • All new cookies become “session” cookies
    • Existing cookies can still be read
    • The new DOM storage feature behaves the same way
    • New entries will not be saved to the browsing history
  • New temporary Internet files will be deleted when the Private Browsing window is closed
  • The following data will not be stored:
    • Form data
    • Passwords
    • Addresses typed into the address bar
    • Queries entered into the search box
    • Visited links
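The cookie behavior in the list above, existing cookies remain readable while new ones live only for the session, can be sketched as a simple two-layer cookie jar (a hypothetical model, not IE’s actual implementation):

```python
class PrivateBrowsingCookieJar:
    # Sketch of InPrivate cookie handling: cookies from normal browsing stay
    # readable, but anything set during the private session is kept only in
    # memory and discarded when the session ends.
    def __init__(self, persistent: dict):
        self.persistent = persistent   # cookies saved before InPrivate started
        self.session = {}              # new cookies, never written to disk

    def set(self, name: str, value: str) -> None:
        self.session[name] = value     # all new cookies become session cookies

    def get(self, name: str):
        return self.session.get(name, self.persistent.get(name))

    def end_session(self) -> None:
        self.session.clear()           # private-session cookies vanish

jar = PrivateBrowsingCookieJar({"login": "alice"})
jar.set("tracking_id", "xyz")
assert jar.get("login") == "alice"         # existing cookies can still be read
assert jar.get("tracking_id") == "xyz"     # new cookie works during the session
jar.end_session()
assert jar.get("tracking_id") is None      # nothing set in-session survives
assert jar.get("login") == "alice"         # prior history was never deleted
```

This is exactly why InPrivate leaves no telltale gap: the pre-existing cookie store is untouched, and only the private session’s additions disappear.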

Other Privacy Features

  • SmartScreen Filter – Called “Phishing filter” in IE 7, this feature monitors and blocks links to malicious downloads. In IE 8, it also monitors links distributed via email and instant messaging (assuming IE is the default Web browser).
  • Cross Site Scripting (XSS) filter – Cross-site scripting attacks allow hackers to “inject” malicious scripts into trusted websites, which can then steal the account credentials of users who access those websites. XSS attacks are dangerous because everything looks fine to users, yet attackers can gain broad access to users’ accounts and data on the affected sites. The XSS filter in IE scans the data received from websites for likely XSS attacks and rewrites the data to neutralize them.
  • ActiveX Opt-In – By default, ActiveX Opt-In disables most ActiveX controls. When a Web page tries to run an ActiveX control, the following text is displayed in an Information Bar: “This website wants to run the following add-on ‘ABC Control’ from ‘XYZ Publisher.’ If you trust the website and the add-on and want to allow it to run, click here …” The user can then choose whether or not to run the ActiveX control.
  • Per-Site ActiveX – If a website tries to access an installed ActiveX control that is not permitted to run on that website, this new feature in IE 8 gives the user the option of blocking the attempt, allowing the ActiveX control for the current site, or allowing all websites to access the ActiveX control.
  • Domain Highlighting – The domain name of the site you’re viewing is highlighted in the address bar. By making it clearer to the user which website they’re accessing, this feature serves to protect users against phishing attacks from domain names that look like trusted domain names (e.g., www.paypal.com.hax0r.net, which is not PayPal’s actual website).
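Domain Highlighting works because the label that matters in a hostname is the rightmost registrable part, not whatever familiar names an attacker stuffs in front of it. A naive Python sketch makes the point (real browsers consult the Public Suffix List to handle cases like .co.uk, which this two-label shortcut gets wrong):

```python
def registrable_domain(hostname: str) -> str:
    # Naive rule: the last two dot-separated labels are the "real" domain.
    # Browsers actually use the Public Suffix List for multi-part TLDs.
    return ".".join(hostname.split(".")[-2:])

assert registrable_domain("www.paypal.com") == "paypal.com"
# The lookalike from the example above actually belongs to hax0r.net:
assert registrable_domain("www.paypal.com.hax0r.net") == "hax0r.net"
```

By visually emphasizing that rightmost portion, the browser makes it harder for “paypal.com” buried in the middle of a hostname to fool the user.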

Additional Reading / Links

]]>
https://techliberation.com/2009/03/06/privacy-solutions-series-part-3-internet-explorer-privacy-features/feed/ 615 12538
Nuts & Bolts: A User’s Guide to ISP Network Management https://techliberation.com/2009/02/24/nuts-bolts-a-user%e2%80%99s-guide-to-isp-network-management/ https://techliberation.com/2009/02/24/nuts-bolts-a-user%e2%80%99s-guide-to-isp-network-management/#comments Tue, 24 Feb 2009 15:19:18 +0000 http://techliberation.com/?p=16872

This is the third in a series of articles about Internet technologies. The first article was about web cookies. The second article explained the network neutrality debate. This article explains network management systems. The goal of this series is to provide a solid technical foundation for the policy debates that new technologies often trigger. No prior knowledge of the technologies involved is assumed.

There has been lots of talk on blogs recently about Cox Communications’ network management trial. Some see this as another nail in Network Neutrality’s coffin, while many users are just hoping for anything that will make their network connection faster.

As I explained previously, the Network Neutrality debate is best understood as a debate about how to best manage traffic on the Internet.

Those who advocate for network neutrality are actually advocating for legislation that would set strict rules for how ISPs manage traffic. They essentially want to re-classify ISPs as common carriers. Those on the other side of the debate believe that the government is unable to set rules for something that changes as rapidly as the Internet. They want ISPs to have complete freedom to experiment with different business models and believe that anything that approaches real discrimination will be swiftly dealt with by market forces. But what both sides seem to ignore is that traffic must be managed. Even if every connection and router on the Internet were built to carry ten times the expected capacity, there would be occasional outages. It is foolish to believe that routers will never become overburdened; they already do. Current routers already have a system for prioritizing packets when they get overburdened: they simply drop all packets received after their buffers are full. This system is fair, but it’s not optimized. The network neutrality debate needs to shift to a debate about what should be prioritized and how. One way packets can be prioritized is by the type of data they’re carrying: applications that require low latency would be prioritized, and those that don’t would not be.
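The contrast between today’s drop-tail behavior and type-based prioritization can be sketched as a toy two-queue scheduler. This is an illustration of the general idea, not any ISP’s actual system; the application labels and buffer sizes are invented for the example.

```python
from collections import deque

class TwoQueueScheduler:
    # Toy model: low-latency traffic goes into a priority queue that is
    # always drained first; bulk traffic waits in a finite buffer and is
    # dropped tail-first when that buffer fills (today's drop-tail behavior,
    # but applied only to the less time-sensitive class).
    LOW_LATENCY = {"voip", "gaming", "ssh"}

    def __init__(self, bulk_buffer: int = 3):
        self.priority = deque()
        self.bulk = deque()
        self.bulk_buffer = bulk_buffer

    def enqueue(self, packet: dict) -> bool:
        if packet["app"] in self.LOW_LATENCY:
            self.priority.append(packet)
            return True
        if len(self.bulk) >= self.bulk_buffer:
            return False                # buffer full: packet dropped
        self.bulk.append(packet)
        return True

    def dequeue(self):
        # Time-sensitive packets always leave the router first.
        if self.priority:
            return self.priority.popleft()
        return self.bulk.popleft() if self.bulk else None

s = TwoQueueScheduler(bulk_buffer=1)
assert s.enqueue({"app": "ftp", "id": 1})
assert not s.enqueue({"app": "ftp", "id": 2})   # bulk buffer full: dropped
assert s.enqueue({"app": "voip", "id": 3})
assert s.dequeue()["app"] == "voip"             # low-latency traffic goes first
```

The FTP transfer still completes, just a bit later, while the VoIP call stays usable, which is the whole argument for prioritizing by traffic type.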

Cox’s Internet service, like most Cable internet services, was built on top of its cable TV service, which was designed to share TV signals in only one direction to households in a relatively small geographic area. Cable companies segment their networks into neighborhoods or “nodes,” with each node connected to a Cable Modem Termination System (CMTS). The size of each node can vary from a few thousand households to a few hundred thousand households. All cable Internet customers connected to a single node share the available bandwidth.

Here’s a simple analogy: Imagine you buy a house with your new spouse. The house has a tankless water heater that can provide an unlimited supply of hot water at a rate of 2-5 gallons per minute, which is adequate for the two of you. When you have houseguests, you manage the limited flow rate by having some people shower in the morning and some people shower in the evening. Then you have kids. As your kids grow up, you all need to shower around the same time in the morning and you experience hot water outages more and more often. You’re faced with two options: Continue to restrict how many people can shower at any one time, or buy a larger-capacity water heater. Substitute broadband for hot water and you’ve got the situation that ISPs are in.

As cable companies add more cable Internet subscribers and individual households use more bandwidth, the cable companies have essentially three options:

  • Segment their networks so each node serves fewer users;
  • Deploy new technology to increase the bandwidth of their CMTSes (e.g., DOCSIS 3.0); or
  • Use the existing bandwidth more “efficiently.”

Using a network more efficiently means deploying some sort of “network management” system. Even though tankless water heaters can supply an endless amount of hot water, if you connect too many sinks and showers to a single heater and turn them all on at once, you will have a (temporary) hot water shortage. That’s why it’s usually not a good idea to run the dishwasher or washing machine when you’re taking a shower. Similarly, bandwidth on the Internet is only limited by the electricity needed to keep the routers running, but when everyone tries to use high-bandwidth applications (like streaming video) simultaneously, the network gets congested and slows down.

When thinking of hot water systems, washing machines and dishwashers can be thought of as non-time-sensitive uses of hot water because it’s usually not important when they’re done, as long as they’re done within a few hours of your preferred time. On the other hand, when you go to wash your hands, you want hot water immediately. This would be an extremely time-sensitive use. Showers probably fall somewhere in the middle. The same variety of time-sensitivity also applies to Internet applications.

When done right, network management is nothing to fear. It allows ISPs to provide better service to more customers at a lower cost. Hopefully, those customers will be happier because their time-sensitive applications will have enough bandwidth. And the lower costs to the ISP may result in lower prices to customers. For customers who want/need more bandwidth than average, ISPs can and do offer different levels of service.

Even in areas where the incumbent broadband ISP does not face any serious competition, network management is good for users: Without network management, it may be completely impossible on an overloaded network to make a VoIP call, remotely connect to your office network, or play online multi-player games.

Cox’s network management policy seems eminently reasonable. First, it only affects “upstream” traffic (i.e., traffic sent from users’ computers). The new system classifies all traffic as either “time-sensitive” (prioritized) or “less time-sensitive” (unprioritized). Unprioritized traffic includes FTP uploads, peer-to-peer file sharing, and Usenet posts. Most importantly, “Any traffic that is not specifically classified will be treated as time-sensitive.” Thus, the policy will not affect new Internet applications or anyone who encrypts their traffic (because encryption prevents your ISP from determining which application you’re using).
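The classification rule just described — deprioritize a short known list, default everything else to time-sensitive — can be sketched in a few lines of Python. This is my own illustration, not Cox's actual system; the protocol labels and the `classify` function are invented for the example:

```python
# Sketch of a default-permit traffic classifier: only traffic on a known
# "less time-sensitive" list is deprioritized; everything else -- including
# new, unrecognized, or encrypted traffic -- gets the time-sensitive default.
LESS_TIME_SENSITIVE = {"ftp-upload", "p2p", "usenet"}

def classify(protocol: str) -> str:
    """Return the queue a packet belongs in, defaulting to prioritized."""
    if protocol in LESS_TIME_SENSITIVE:
        return "less time-sensitive"
    return "time-sensitive"

print(classify("p2p"))      # less time-sensitive
print(classify("voip"))     # time-sensitive
print(classify("unknown"))  # time-sensitive (the safe default)
```

The key design choice is the default: because unclassified traffic is prioritized, the scheme cannot accidentally penalize applications it has never heard of.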

If you’ve noticed your Internet connection has suddenly slowed, your ISP’s new network management policy is probably not the cause. It may simply be that there are more households sharing the same last-mile connection and those households are using it more. What is needed are new metrics to compare broadband offerings. Heavy users of peer-to-peer file transfer applications may indeed see faster speeds by switching to an ISP that doesn’t use network management. But if all such users in a particular area switch to that ISP, its network will likely become overloaded quickly, forcing it to implement network management practices of its own. Just as insurance companies and financial institutions must avoid setting policies that attract the sickest or least-creditworthy customers, ISPs face the same problem of “adverse selection”: without some form of network management (or a premium charge for unrestricted bandwidth), they will attract the most bandwidth-intensive users.

New Metrics

Choosing an ISP based only on price and downstream rate is simply not enough anymore. The old adage that “you get what you pay for” still applies. The first thing bandwidth shoppers who have a choice between cable Internet service and some other form of Internet service like DSL or fiber need to realize is that only cable Internet services share the last-mile connection among multiple households; DSL and fiber services do not. Next, you need to understand that the quoted transfer rate is not guaranteed; it’s simply the fastest speed you can expect to obtain under ideal conditions (which may occur only when all your neighbors have their computers turned off). Beyond that, the following are some terms that should help you decide between ISPs and the different packages offered by each.

To return to the water heater analogy, if you move into an apartment building with a central tankless water heater, knowing the water heater’s flow rate is meaningless if you don’t know how many other people are living in the building and sharing the same water heater. Of course some people take longer showers than others. If how much hot water you get for your morning shower is really important to you, you may be better off finding an apartment with your own private water heater. But for those that will have to share a water heater with others, you’ll want to know the capacity of the water heater and the number of people it will be shared with.

  • Bandwidth – Bandwidth is exactly like the flow rate of a tankless water heater: a measure of how much of some quantity (water or data) the system can deliver over a fixed period of time. Tankless water heaters are measured in gallons per minute; bandwidth is measured in megabits per second. NOTE: Most telecommunications equipment measures quantities in bits (and multiples of bits such as kilobits, megabits, and gigabits) but most storage devices measure quantities in bytes (and kilobytes, megabytes, and gigabytes). When abbreviated, MB means megabyte and Mb means megabit. There are 8 bits in a byte, so a high-quality photo from a 6 megapixel camera (approximately 2.2 megabytes in size) would take about 3 seconds to transfer across an otherwise unused 6 megabit per second (Mbps) connection. For more about bandwidth and how it relates to latency, which is a truer measure of actual speed, refer to my earlier article in this series, “Some basics about edge caching, network management, & Net neutrality.”
  • Powerboost – This technology, now used by a number of ISPs, gives a speed boost to the first few megabytes of each upload and download. This is great for casual web surfing, but for large files the boost isn’t all that significant. With one ISP’s package, the speed boost is from 6Mbps to 15Mbps for only the first 10MB of each download, which saves a maximum of 8 seconds per download regardless of how big the file is. When comparing packages, be sure to compare the actual download speeds as well as the boosted download speeds. In some cases, the actual download speeds are not reported in the ISP’s advertising and you need to call to find them out.
  • Contention Ratio – This is the ratio of the total bandwidth promised to all users (based on their service plan) to the actual bandwidth available on the connection. If there are 2000 households, each with a 10Mbps plan, sharing a last-mile connection with a total capacity of 1Gbps, the contention ratio would be 20:1. To go back to water heaters: If each of 20 apartments in a single building is promised hot water at a flow rate of 3 gallons per minute, the building would need a heater with a flow rate of 60 gallons per minute to meet the demand if everyone takes a shower at the same time. That would result in a contention ratio of 1:1. But if the building tries to save money by installing a cheaper heater with a flow rate of only 30 gallons per minute, the contention ratio would rise to 2:1. ISPs in the U.S. do not normally disclose contention ratios, but the practice is common in the U.K., where leading ISP BT has guidelines requiring a ratio between 20:1 and 50:1. There’s no way to determine your own contention ratio, but it might be worth asking the next time you’re shopping around for broadband service, if for no other reason than to raise awareness of this important metric.
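The arithmetic behind these metrics is straightforward. A Python sketch using the numbers from the examples above (the function names are my own):

```python
def contention_ratio(households: int, plan_mbps: float, capacity_mbps: float) -> float:
    """Ratio of total promised bandwidth to actual shared capacity."""
    return (households * plan_mbps) / capacity_mbps

# 2000 households on 10 Mbps plans sharing a 1 Gbps (1000 Mbps) link:
print(contention_ratio(2000, 10, 1000))  # 20.0, i.e. a 20:1 contention ratio

def boost_savings(size_mb: float, base_mbps: float, boost_mbps: float,
                  boosted_mb: float) -> float:
    """Seconds saved by a Powerboost-style speedup on the first few megabytes."""
    boosted_bits = min(size_mb, boosted_mb) * 8  # megabits covered by the boost
    return boosted_bits / base_mbps - boosted_bits / boost_mbps

# Boosting 6 Mbps to 15 Mbps for the first 10 MB of a large download:
print(round(boost_savings(100, 6, 15, 10), 1))  # 8.0 seconds saved, at most
```

Note how the boost savings are capped: once the file is larger than the boosted window, the savings stay at 8 seconds no matter how big the download gets.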

In conclusion, there are a number of potential causes for a slow Internet connection and a number of possible solutions–but the deployment of network management systems by ISPs is probably not to blame. If anything, most users on such ISPs should notice their connections become faster for most applications. If you’ve ever had no hot water to wash your hands because someone was running the dishwasher, you’ll understand why network management is important. As long as an ISP isn’t using its network management system to favor one application over a competitor (e.g. prioritizing its own voice-over-IP (VoIP) service but not prioritizing other VoIP services), network neutrality advocates should have no cause for alarm. As explained above, Cox’s new system meets this test.

Nuts & Bolts: Everything You Wanted To Know About Cookies But Were Afraid To Ask https://techliberation.com/2009/01/27/nuts-and-bolts-everything-you-wanted-to-know-about-cookies-but-were-afraid-to-ask/ https://techliberation.com/2009/01/27/nuts-and-bolts-everything-you-wanted-to-know-about-cookies-but-were-afraid-to-ask/#comments Tue, 27 Jan 2009 12:25:06 +0000 http://techliberation.com/?p=12932

As a means of introducing myself to TLF readers, this is an article that I wrote for the PFF blog in September that has not been previously mentioned on the TLF. Most of my other PFF blog posts have been cross-posted by Adam Thierer or Berin Szoka, but I’ve taken ownership of those posts so they appear on my TLF author page.

This is the first in a series of articles that will focus directly on technology instead of technology policy. With an average age of 57, most members of Congress were at least 30 when the IBM PC was introduced in 1981. So it is not surprising that lawmakers have difficulty with cutting-edge technology. The goal of this series is to provide a solid technical foundation for the policy debates that new technologies often trigger. No prior knowledge of the technologies involved is assumed, but no insult to the reader’s intelligence is intended.

This article focuses on cookies–not the cookies you eat, but the cookies associated with browsing the World Wide Web. There has been public concern over the privacy implications of cookies since they were first developed. But to understand them, you must know a bit of history.

According to Tim Berners-Lee, the creator of the World Wide Web, “[g]etting people to put data on the Web often was a question of getting them to change perspective, from thinking of the user’s access to it not as interaction with, say, an online library system, but as navigation th[r]ough a set of virtual pages in some abstract space. In this concept, users could bookmark any place and return to it, and could make links into any place from another document. This would give a feeling of persistence, of an ongoing existence, to each page.”[1. Tim Berners-Lee, Weaving The Web: The Original Design and Ultimate Destiny of the World Wide Web. p. 37. Harper Business (2000).] The Web has changed quite a bit since the early 1990s.

Today, websites are much more dynamic and interactive, with every page being customized for each user. Such customization could include automatically selecting the appropriate language for the user based on where they’re located, displaying only content that has been added since the last time the user visited the site, remembering a user who wants to stay logged into a site from a particular computer, or keeping track of items in a virtual shopping cart. These features are simply not possible without the ability for a website to distinguish one user from another and to remember a user as they navigate from one page to another. Today, in the Web 2.0 era, instead of Web pages having persistence (as Berners-Lee described), we have dynamic pages and “user-persistence.”

This paper describes the various methods websites can use to enable user-persistence and how this affects user privacy. But the first thing the reader must realize is that the Web was not initially designed to be interactive; indeed, as the quote above shows, the goal was the exact opposite. Yet interactivity is critical to many of the things we all take for granted about web content and services today.

Stateful Sessions

On the original World Wide Web designed by Berners-Lee (Web 1.0), Web servers responded to each client request without relating that request to previous requests. There was no need to remember what other pages the user had requested because the requests were for static pages. But if you’ve used a Web-based email system like Gmail, Hotmail, Yahoo! Mail, etc., you know that once you log in, the service remembers who you are as you click from message to message. When a website can keep track of a user as they move from page to page within a site, it is called a “stateful session.” The website doesn’t necessarily need to know anything about the user; it just needs to be able to distinguish that particular user from all other users. For example, if you go to an online store and place a few items in your virtual shopping cart, the site still does not know your name, email address, or billing information. But it does know what you’ve placed in your cart–or more precisely, it knows what someone using your browser has placed in a particular cart. If you leave the site before buying anything and then go back an hour later, it’s possible that the site will have completely forgotten about you. In that case, the unique identifier persists during your “session” on the site, but it doesn’t persist between sessions.
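A minimal server-side sketch of a stateful session, assuming a hypothetical shopping-cart store keyed by an opaque session identifier (this is illustrative Python, not any real site's code):

```python
import secrets

carts = {}  # session id -> list of items; the server knows nothing else about the user

def new_session() -> str:
    """Issue an opaque identifier that distinguishes this browser from all others."""
    sid = secrets.token_hex(16)
    carts[sid] = []
    return sid

def add_to_cart(sid: str, item: str) -> None:
    carts[sid].append(item)

sid = new_session()
add_to_cart(sid, "paperback")
add_to_cart(sid, "coffee mug")
print(carts[sid])  # ['paperback', 'coffee mug'] -- no name or email required
```

The point of the sketch is that the identifier is the whole mechanism: the server can maintain a cart per visitor without learning anything personal about any of them.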

URLs and HTTP Requests

Web 1.0 sites achieve Web page persistence by having a unique address or Uniform Resource Locator (URL) for each Web page, which is displayed in the address bar at the top of your browser as you browse the web. For example, http://www.pff.org/about/ is a simple URL pointing to a specific Web page. Every user that visits the PFF site at www.pff.org and clicks on the “About” link will be taken to the exact same page.

URLs can also store information about the user. For example, if you search for “test” on Google, the URL of the resulting page may look like the following: http://www.google.com/search?q=test&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:official&client=firefox-a.[2. http://googlesystem.blogspot.com/2006/07/meaning-of-parameters-in-google-query.html] The URL contains a number of different pieces of data, separated by ampersands. There is the search query (“q=test”), the character encoding of the input (“ie=utf-8”), the character encoding of the output (“oe=utf-8”), the type and language of the client (“rls=org.mozilla:en-US:official”), and the Web browser used (“client=firefox-a”). None of this information can be used to uniquely identify the user, but this basic example illustrates how URLs can be used to specify more than simply static Web pages–and how some information can be remembered as a user navigates a website even without using cookies. Knowing how this works, you can create your own advanced searches or change the way the results are formatted (e.g., changing the language).
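Python's standard library can pull such a URL apart, using the same Google search URL quoted above:

```python
from urllib.parse import urlparse, parse_qs

url = ("http://www.google.com/search?q=test&ie=utf-8&oe=utf-8&aq=t"
       "&rls=org.mozilla:en-US:official&client=firefox-a")

# parse_qs splits the query string on ampersands and returns each
# parameter as a list (a parameter may legally appear more than once).
params = parse_qs(urlparse(url).query)
print(params["q"])       # ['test']       -- the search query
print(params["client"])  # ['firefox-a']  -- the browser
```

Changing a parameter and rebuilding the URL is all it takes to create your own advanced search, which is exactly the kind of per-request customization the URL itself carries.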

So how did Google know I speak English and use Firefox? That information is included in the HTTP request that my Web browser sends to the Google Web server when it requests a page. HTTP requests specify (among a few other more technical things) the desired language and a “User-Agent” field that includes the name of the browser and sometimes your operating system. This information allows websites to customize their content for different Web browsers (e.g., to ensure that it displays properly). HTTP requests also include your IP address so the Web server knows where to send its response, and IP geolocation allows Web servers to associate an IP address with a geographic area (though the area is rarely more precise than the country or state). HTTP requests can also contain HTTP cookies.
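Here is what such a request looks like on the wire — a sketch with illustrative header values, not any browser's exact output:

```python
# The raw text of a minimal HTTP/1.1 GET request. The User-Agent and
# Accept-Language values below are illustrative, not real browser strings.
request = (
    "GET /search?q=test HTTP/1.1\r\n"
    "Host: www.google.com\r\n"
    "User-Agent: Mozilla/5.0 (Windows; U; en-US) Firefox/3.0\r\n"  # browser + OS
    "Accept-Language: en-us,en;q=0.5\r\n"                          # desired language
    "Cookie: PREF=abc123\r\n"   # any stored cookies ride along automatically
    "\r\n"                      # a blank line ends the header block
)
print(request)
```

Everything the article describes — language, browser, cookies — travels as plain text headers; the IP address is not a header at all but comes from the underlying network connection.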

HTTP Cookies

URLs can be used to uniquely identify individual users and allow stateful sessions, but unless a user bookmarks the URL containing their unique identifier, there is no way for the site to associate the same unique identifier with the same user on subsequent visits. Another option is to have users create an account and then log in each time they access the site. The website could then include the user’s unique ID in the URL on subsequent pages, so that the user only needs to log in once per session. Having to bookmark or create an account on every site you want to remember you would quickly become unmanageable. It would be nice if mapping and weather websites, for example, just remembered your location. It would be nice if the blogs you follow remembered what post you last read and displayed only unread posts when you next visit their site. What was needed at this point in the Web’s evolution was a way for websites to automatically store a unique identifier on the user’s computer and send it back to the website automatically[3. A site could also try to uniquely identify users by the IP address of their computer, but this is unreliable as there can be many computers behind a firewall sharing a single IP address.]—which is precisely what a cookie does.

To quote Wikipedia,

“HTTP cookies, or more commonly referred to as Web cookies, tracking cookies or just cookies, are parcels of text sent by a server to a Web client (usually a browser) and then sent back unchanged by the client each time it accesses that server. HTTP cookies are used for authenticating, session tracking (state maintenance), and maintaining specific information about users, such as site preferences or the contents of their electronic shopping carts.”

A cookie can contain one or more pieces of data, a description and/or URL for an online description of the cookie, how long the Web browser should store the cookie, and the domain, path, and port that the cookie should be limited to. Cookies can be set to expire after a specified interval, or can be “session cookies” that will expire when the Web browser is closed. When a cookie expires, it is deleted by the Web browser. Unexpired cookies are automatically sent back to the originating Web server when the Web browser makes any subsequent requests to the same server (the same domain, path, and port).

Neither Web servers nor Web browsers are required to support cookies, but a server may refuse to work with a Web browser that does not return the cookie(s) it sends. Cookies do not contain any executable code and are extremely small in size. They only contain data sent by the website and the data is not changed by the client computer, so there generally should be no privacy concerns about sending a cookie back to the website that created it (“First-party cookies”).
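Python's standard-library `http.cookies` module can build and parse these headers; a sketch with made-up cookie values:

```python
from http.cookies import SimpleCookie

# Server side: build a Set-Cookie header with a lifetime, domain, and path.
cookie = SimpleCookie()
cookie["session_id"] = "abc123"
cookie["session_id"]["domain"] = "example.com"
cookie["session_id"]["path"] = "/"
cookie["session_id"]["max-age"] = 3600  # expire after one hour
print(cookie.output())  # e.g. Set-Cookie: session_id=abc123; Domain=example.com; ...

# Client side: the browser stores the value and sends it back unchanged
# in a Cookie header on later requests to the same server.
returned = SimpleCookie()
returned.load("session_id=abc123")
print(returned["session_id"].value)  # abc123
```

Note that the value makes the round trip unchanged — the cookie is pure data, with no executable code, exactly as described above.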

First-Party and Third-Party Cookies

Cookies are normally only sent to the server setting them or a server in the same domain (e.g., a cookie set by mail.google.com could be shared with calendar.google.com). These are called first-party cookies because they’re set by the site displayed in the address bar of the Web browser. These cookies are typically used to tailor the website for the user. Third-party cookies, on the other hand, are typically used by advertising networks to track users across multiple Web sites where the networks have placed advertising–which allows the advertising network to target subsequent advertisements to the user’s presumed interests and also to limit the number of times a user is shown a particular ad. This targeting allows the delivery of “smarter” advertising that is less annoying and more informative to the user–and therefore more valuable to the advertiser, who will be willing to pay websites more for their ad space. However, this targeting also raises privacy concerns.

It is trivial for a Web page to contain images or other components stored on servers in other domains (“third-party elements”). In fact, it is often easier to link to an image already hosted online elsewhere than it is to host an image on your own Website.

Examples (illustrative markup; the URLs are hypothetical):

  • Typical first-party embedded image: <img src="http://www.example.com/images/logo.gif">
  • Typical third-party embedded image: <img src="http://images.adnetwork.example/banner.gif">

Whenever a Web browser loads a Web page or component of a Web page, it will include in its request for that component any cookies already stored on the user’s computer that are associated with the domain hosting the content. The Web server, in turn, can send a cookie or update a cookie already existing on the user’s computer.

Although your Web browser will not send a third-party cookie to the first-party Web server (and it won’t send a first-party cookie to the third-party Web server), the first-party Web server can send information to the third-party Web server by embedding it in the URL for the third-party content. The most common form of this communication between the sites you visit and the sites they rely on for content or ads is called a “web bug”–a small (usually 1 pixel by 1 pixel) graphic not meant to be noticed by the user. Its purpose is to cause the user’s Web browser to load the third-party embedded content from the external Web server, which will allow the third party (usually an advertising network) to track the user.

  • Example third-party embedded web bug (illustrative): <img src="http://ads.adnetwork.example/pixel.gif" width="1" height="1">

While this all may seem scary and invasive, the fact that a website or ad network can uniquely identify your browser does not mean that they have any clue who you are. Even if you provide your name, email address, or other personally-identifiable information to the first-party Web site, most sites’ privacy policies state that they will not share this information with their advertising partners. To use a real-world analogy, third-party advertising is equivalent to a marketer in a mall watching you come out of a music store and then offering you a flyer for a concert: The marketer may know that you’re interested in music (because you were shopping at the music store), but they have no idea who you are. And as my colleagues Adam Thierer and Berin Szoka explained in their post on Adblock Plus, websites (especially smaller independent websites) depend on advertising as a source of revenue and to cover their overhead costs.

Alternatives to Cookies

Cookies are not the only way websites can maintain stateful sessions. As has already been mentioned, websites can put unique identifiers in URLs. But custom URLs don’t last between sessions. Websites that need to remember users (e.g., websites that charge a fee for access) can require users to create an account and log into the site every time they use it.

But most websites do not require users to create an account and log in every time. And more and more users are configuring their Web browsers to delete all cookies when they close the browser. In response, Web site operators have found other methods to uniquely identify users by storing a unique identifier on users’ computers.

The cookie alternatives listed below are not any more or less invasive of privacy than cookies if the user is aware of them and manages them the same way they manage cookies. But most Web browsers don’t give users the same amount of control over cookie alternatives that they do over cookies, and few users know about these alternatives.

Per-session cookie alternatives – These cookie alternatives are not saved to disk and thus are not accessible after you close your Web browser.

  • Hidden form fields – Web pages can contain hidden Web forms that submit data back to the Web server when an on-screen button is pressed. This method is quite limited because it requires the user to click a specific button, and there is no method for saving data after you’ve navigated away from the site. Beyond these limitations, the only way to detect hidden form fields is to inspect the HTML code for a page. There is also no easy way to block hidden form fields.
  • window.name – JavaScript embedded in a Web page can set or read this internal value, which isn’t really used for anything else. The value can be up to 32 megabytes in size, and once set, a value can be accessed by any Web site. Although the only way to detect this is to inspect the HTML code for a page, you can disable JavaScript.

Persistent cookie alternatives – These cookie alternatives are like cookies in that they are saved on your computer and can be accessed even after you’ve closed your Web browser.

  • Flash Cookies – Also known as Local Shared Objects, Flash cookies require Adobe Flash to be installed on your computer. Whereas HTTP cookies are limited to 4 kilobytes, Flash cookies can contain up to 100 kilobytes by default and can contain an unlimited amount of data if the user desires. To view and delete the Flash cookies stored on your computer, go to this page (although accessed via a Web page, the Flash cookies shown are stored on your computer). You can also permanently disable Flash cookies on that page.
  • DOM Storage – DOM storage was designed specifically to allow Web 2.0 applications to work offline, saving data locally when they are unable to access the host website and to save data that would otherwise be lost if a page is accidentally reloaded. DOM storage is currently only implemented in Firefox (and Internet Explorer 8 Beta). If cookies are disabled, DOM storage is also disabled. Users can also manually disable DOM storage even when cookies are enabled.
  • userData behavior – The userData behavior does for Internet Explorer what DOM storage does for Firefox. Each “document” is limited to 128 kilobytes of storage, with a per-domain limit of 1024 kilobytes. The data is stored in Internet Explorer’s cache and is deleted when you delete cookies using the Delete Browsing History dialog box.

Conclusion

This article should give you a better sense of what cookies are used for and how they work. You should now see that per-session cookies and cookie alternatives are completely harmless. Persistent cookies (and cookie alternatives) can make your Web browsing a bit easier, but deleting them will not (in most cases) cause any problems. If you are concerned about your privacy, you will need to do a bit more than just delete cookies–you also need to delete or disable the above-mentioned cookie alternatives.

Some basics about edge caching, network management, & Net neutrality https://techliberation.com/2008/12/18/some-basics-about-edge-caching-network-management-net-neutrality/ https://techliberation.com/2008/12/18/some-basics-about-edge-caching-network-management-net-neutrality/#comments Thu, 18 Dec 2008 19:44:59 +0000 http://techliberation.com/?p=15036

The introduction below was originally written by Adam Thierer, but now that I (Adam Marcus) am a full-fledged TLF member, I have taken authorship.


My PFF colleague Bret Swanson had a nice post here yesterday talking about the evolution of the debate over edge caching and network management (“Bandwidth, Storewidth, and Net Neutrality“), but I also wanted to draw your attention to a related essay by another PFF colleague of mine. Adam Marcus, who serves as a Research Fellow and Senior Technologist at PFF, has started a wonderful series of “Nuts & Bolts” essays meant to “provide a solid technical foundation for the policy debates that new technologies often trigger.” His latest essay is on network neutrality and edge caching, which has been the topic of heated discussion since the Wall Street Journal’s front-page story on Monday that Google had approached major cable and phone companies and supposedly proposed to create a fast lane for its own content.

Anyway, Adam Marcus gave me permission to reprint the article in its entirety down below. I hope you find this background information useful.


Nuts and Bolts: Network neutrality and edge caching

by Adam Marcus, Progress & Freedom Foundation

December 17, 2008

This is the second in a series of articles about Internet technologies. The first article was about web cookies. This article explains the network neutrality debate. The goal of this series is to provide a solid technical foundation for the policy debates that new technologies often trigger. No prior knowledge of the technologies involved is assumed.

To understand the network neutrality debate, you must first understand bandwidth and latency. There are lots of analogies equating the Internet to roadways because the analogies are quite instructive. For example, if one or two people need to travel across town, a fast sports car is probably the fastest method. But if 50 people need to travel across town, it may require 25 trips in a single sports car, so a bus that can transport all 50 people in a single trip may be “faster” overall. The sports car is faster, but the bus has more capacity.

Bandwidth is a measure of capacity: how much data can be transmitted in a fixed period of time. It is usually measured in megabits per second (Mbps). Latency is a measure of speed: the time it takes a single packet of data to travel between two points. It is usually measured in milliseconds. The “speeds” that ISPs advertise have nothing to do with latency; they’re actually referring to bandwidth. ISPs don’t advertise latency because it’s different for each site you’re trying to reach.

The Internet consists of devices and wires connecting those devices. The speed of data along the wires is fixed–there are no fast lanes and slow lanes. The only way to increase speeds is to either travel a shorter path or to get priority at the routers, the virtual traffic lights of the Internet. ISPs advertise bandwidth because with more bandwidth, more data can get to you in fewer trips, making your broadband connection seem much faster than a dial-up connection.
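The distinction can be summed up in one rough formula: total transfer time ≈ latency + size ÷ bandwidth. A sketch with illustrative numbers:

```python
def transfer_time(size_megabits: float, bandwidth_mbps: float,
                  latency_ms: float) -> float:
    """Rough transfer time in seconds: propagation delay plus send time."""
    return latency_ms / 1000 + size_megabits / bandwidth_mbps

# A tiny instant message (1 kilobit) barely notices bandwidth; latency dominates:
print(round(transfer_time(0.001, 6, 50), 3))   # 0.05 s on 6 Mbps
print(round(transfer_time(0.001, 50, 50), 3))  # 0.05 s on 50 Mbps -- no difference

# An 80-megabit file is the opposite: bandwidth dominates, latency is noise.
print(round(transfer_time(80, 6, 50), 2))   # 13.38 s on 6 Mbps
print(round(transfer_time(80, 50, 50), 2))  # 1.65 s on 50 Mbps
```

This is why more bandwidth makes a connection "seem" faster for big downloads while doing almost nothing for small, chatty traffic like instant messages.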

Sometimes latency and bandwidth are important and sometimes they’re not that important. The typical response time between any two points on the Internet is 1/5th of one second, so the difference between a relatively fast and relatively slow connection isn’t much. If you’re sending an email (without any attachments) or chatting with someone using an Instant Messaging program, you’re not using much bandwidth and if your messages are delayed by a second it’s probably not a problem. Or when Microsoft Windows is downloading system updates in the background, whether the download completes in a few minutes or an hour really doesn’t matter–as long as it completes. The emails and IMs are low-bandwidth and the system updates are usually high-bandwidth, but in both of these examples, latency is not that important. But if you’re playing a real-time online multiplayer game, making a VoIP phone call, videoconferencing, or remotely connecting to another computer using pcAnywhere, GoToMyPC, or Remote Desktop Services, both bandwidth and latency are important. Without a high-bandwidth low-latency connection, you’ll experience drop-outs and lag. NOTE – Latency is a measure of time, so the lower the latency the better.

Latency is most affected by the Internet equivalent to traffic lights: routers. Data transmitted over the Internet is sent in packets which contain a header that specifies, among a few other things, the IP address of the intended destination computer. Between every connection sits a router. For every packet that arrives at every router, the router must look at its header to determine where to send it, and then forward the packet out along the proper connection. Normally, routers inspect and forward packets with almost no delay. But when there are too many packets for a router to handle or the tubes get filled, the packets are temporarily queued in the router’s memory. This queuing imposes some delay. If the memory becomes full, the router drops (deletes) some of the packets and tries to keep going. If the sending computer doesn’t get a response in a certain amount of time, it assumes the packet has been dropped and sends it again, resulting in even more delay. On average, about 6% of packets are lost.
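This "drop-tail" behavior — queue what fits, discard the rest — can be modeled in a toy Python sketch (real routers are vastly more complex):

```python
from collections import deque

class DropTailRouter:
    """Toy router: packets queue in arrival order; arrivals beyond the
    buffer's capacity are simply dropped, forcing the sender to retransmit."""

    def __init__(self, buffer_size: int):
        self.queue = deque()
        self.buffer_size = buffer_size
        self.dropped = 0

    def receive(self, packet: str) -> None:
        if len(self.queue) < self.buffer_size:
            self.queue.append(packet)   # queued: forwarded later, adding delay
        else:
            self.dropped += 1           # buffer full: packet is lost

    def forward(self):
        """Send the oldest queued packet on toward its destination."""
        return self.queue.popleft() if self.queue else None

router = DropTailRouter(buffer_size=3)
for i in range(5):                      # a burst of 5 packets hits the router
    router.receive(f"pkt{i}")
print(router.dropped)                   # 2 -- the burst overflowed the buffer
print(router.forward())                 # pkt0 -- first in, first out
```

Note that the sender never learns directly that a packet was dropped; it only infers the loss from a missing response, which is where the extra retransmission delay comes from.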

One way to deal with overloaded routers is to simply install more and bigger routers. Another method is to build more connections so packets don’t have to travel through as many routers. But both of these options are costly and it’s not clear whether simply increasing capacity will be enough to keep pace with increasing demand. A third option is to prioritize the packets. Prioritizing packets is kind of like the Mobile InfraRed Transmitter (MIRT) system that allows emergency response vehicles (e.g. fire, police, and EMS) to immediately turn specially-equipped traffic lights green. Most people would probably agree that this form of traffic prioritization is a good idea. But when referring to the Internet, talk of traffic prioritization starts arguments.

The Network Neutrality Debate: What’s It All About

The network neutrality debate is a debate about the best method to manage traffic on the Internet. Those who advocate for network neutrality are actually advocating for legislation that would set strict rules for how ISPs manage traffic. They essentially want to re-classify ISPs as common carriers. Those on the other side of the debate believe that the government is unable to set rules for something that changes as rapidly as the Internet. They want ISPs to have complete freedom to experiment with different business models and believe that anything that approaches real discrimination will be swiftly dealt with by market forces.

But what both sides seem to ignore is that traffic must be managed. Even if every connection and router on the Internet were built to carry ten times the expected capacity, there would be occasional outages. It is foolish to believe that routers will never become overburdened; they already do. Current routers already have a system for prioritizing packets when they get overburdened: they simply drop all packets received after their buffers are full. This system is fair, but it's not optimized.

The network neutrality debate needs to shift to a debate about what should be prioritized and how. One way to prioritize packets is by the type of data they carry: applications that require low latency (such as voice calls) would be prioritized, and those that don't would not. But who makes those determinations? What happens if someone hacks their computer to mark packets for priority they shouldn't receive? Another method is for ISPs to offer prioritization for a fee. ISPs could determine who gets prioritization based on the source or destination IP address in the packet header, or content providers could pay ISPs to prioritize only packets they tag with a special marker.
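Both schemes amount to replacing the router's first-in-first-out queue with a priority queue. Here is a minimal sketch in Python; the traffic classes and priority values are invented for illustration, not any ISP's actual scheme:

```python
import heapq

# Lower number = higher priority; these classes are purely illustrative.
PRIORITY = {"voip": 0, "video": 1, "web": 2, "bulk": 3}

class PriorityRouter:
    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker: preserves FIFO order within a class

    def receive(self, traffic_class, packet):
        heapq.heappush(self._heap, (PRIORITY[traffic_class], self._seq, packet))
        self._seq += 1

    def forward(self):
        return heapq.heappop(self._heap)[2] if self._heap else None

r = PriorityRouter()
r.receive("bulk", "file-chunk")
r.receive("voip", "voice-frame")
r.receive("web", "page-request")
print(r.forward())  # "voice-frame" jumps the queue despite arriving second
```

Whether the class assigned to a packet comes from inspecting its contents, its IP addresses, or a paid-for tag is exactly what the policy fight is about; the queuing mechanics are the same either way.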

Opponents of network neutrality mandates argue that it's simply not feasible to increase capacity to the extent that would be necessary without prioritization. They believe that with prioritization, they will be able to charge more for faster access to those willing to pay, and that the increased revenue will provide the funding necessary to upgrade the networks, which will benefit everyone. As the saying goes, a rising tide lifts all boats. Network neutrality advocates fear that if ISPs are allowed to charge for prioritization, they will have no incentive to increase speeds for those who don't pay for it. While that may be true, price discrimination is very different from other forms of discrimination. It would be a real shame if the net neutrality debate over latency hampered efforts to increase bandwidth. Even common carriers were not restricted from setting different prices for different classes of service; they simply had to offer the same rates to all comers. If those who claim the Internet should be a completely level playing field applied the same logic to the phone system, toll-free numbers wouldn't be allowed.

Edge Caching: What It Is and Isn’t

Monday's Wall Street Journal ran an article suggesting that Google is abandoning its stance as an advocate for network neutrality because of a plan to set up edge caching servers. Edge caching is just a way to balance the costs of storage space and bandwidth more efficiently in an attempt to decrease latency. It's a way to move content "closer" to the end-users who view it, avoiding the latency that occurs as packets traverse longer distances across the network.

To continue the roadways analogy, imagine the Internet arranged like a city. The end-users are all in the suburbs and the data they want to access is downtown in the network's "core." With this model, every request from a user needs to "commute" from the suburbs to the core, and the requested data then needs to travel from the core all the way back to the suburbs. Just as companies realized that setting up satellite offices nearer to their workers would decrease commuting times and increase productivity, content providers have realized that setting up edge caching servers at major ISPs decreases latency and saves on bandwidth costs.

Edge caching doesn't work for all types of Internet content. If the content changes rapidly, edge caching doesn't save much bandwidth because you're constantly pushing new content to the edge servers. But for popular YouTube videos, edge caching is a great way for Google to save on bandwidth costs. Before Google bought YouTube, YouTube outsourced the hosting of its videos to edge caching provider Limelight. So it's no surprise that Google is now looking to do the same with its own edge caching servers.
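The trade-off behind edge caching reduces to a simple cache check: serve a video from the nearby edge server if it's there, otherwise fetch it from the distant origin. A toy model in Python, with latency figures invented purely for illustration:

```python
class EdgeCache:
    """Toy model: serve from a nearby cache if present, else the far origin."""
    EDGE_LATENCY_MS = 10      # illustrative round-trip to a nearby edge server
    ORIGIN_LATENCY_MS = 150   # illustrative round-trip to the distant origin

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}

    def fetch(self, video_id, origin):
        if video_id in self.store:
            return self.store[video_id], self.EDGE_LATENCY_MS  # cache hit
        content = origin[video_id]            # cache miss: go to the origin
        if len(self.store) < self.capacity:
            self.store[video_id] = content    # keep a copy at the edge
        return content, self.ORIGIN_LATENCY_MS

origin = {"popular-video": b"..."}
edge = EdgeCache(capacity=100)
_, first = edge.fetch("popular-video", origin)   # first view: origin, 150 ms
_, second = edge.fetch("popular-video", origin)  # every later view: edge, 10 ms
print(first, second)
```

For a popular video, only the first request pays the long-haul bandwidth cost; every subsequent viewer near that edge server is served locally, which is the whole economic point.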

The fact that Google can afford to set up edge caching servers around the network does give it a bit of an advantage. But the advantage is mostly a savings in bandwidth costs for the content provider. The use of edge servers is meant to be almost imperceptible to users. Accessing content from edge servers may be a bit faster for users, but nobody is being discriminated against, and most content on the Internet is not latency-sensitive. For Internet video, the difference between playing a video hosted on an edge caching server and one hosted on a server located far away may be just a few seconds' delay before the video begins playing.

Some, like the Wall Street Journal, argue that even edge caching violates the net neutrality principle of the Internet being a level playing field. I would suggest that only discriminatory practices, such as an ISP offering packet prioritization to only some companies, should be considered a violation of net neutrality principles.

As Google points out, other companies are free to set up their own edge caching servers or use one of the many companies that offer edge caching services. There have been economies of scale in other industries for generations. The fact that edge caching provides economies of scale for Internet content providers is not a game changer. On the Internet, just as in other media industries, it's not who can get their goods to market the fastest; it's whose content best satisfies their audiences.

— Adam Marcus (adamm@pff.org)

Still Cloudy on Cloud Computing: A Matrix to Guide the Coming Policy Debates https://techliberation.com/2008/09/12/still-cloudy-on-cloud-computing-a-matrix-to-guide-the-coming-policy-debates/ https://techliberation.com/2008/09/12/still-cloudy-on-cloud-computing-a-matrix-to-guide-the-coming-policy-debates/#respond Fri, 12 Sep 2008 22:41:42 +0000 http://techliberation.com/?p=12701

The introduction below was originally written by Berin Szoka, but now that I (Adam Marcus) am a full-fledged TLF member, I have taken authorship.


Adam Marcus, our exceptionally tech-savvy new research assistant at PFF, has published his first piece at the PFF blog, which I reprint here for your edification.

Today Google’s DC office hosted an interesting panel on cloud computing.  What was missing was a good definition of what “cloud computing” actually is.

While Wikipedia has its own broad definition of cloud computing, many think of cloud computing more narrowly as strictly web-based applications for which clients need nothing but a web browser. But that definition doesn't cover things like Skype and SETI@home. And just because PFF has implemented Outlook Web Access so we can access the Exchange server via the Web doesn't necessarily mean we've implemented what most people might think of as "cloud computing." Yet these are all variations on a common theme, which leads me to propose my own basic definition: any client/server system that operates over the Internet.

To understand the potential policy and legal issues raised by cloud computing so defined, one must break down the discussion into a four-part grid. One axis is divided into private data (e.g., email) and public data (e.g., photo sharing). The other axis is divided into data hosted on a single server or centralized server farm and data hosted on multiple computers in a dynamic peer-to-peer network (e.g., BitTorrent file sharing).

Examples:
- Centralized server(s):
  - User data is public: blogs, discussion boards, Flickr
  - User data is private: web-based email servers, Windows Terminal Services
- Peer-to-peer:
  - User data is public: BitTorrent, FreeNet
  - User data is private: Skype, Wuala

There are also a great number of peer-to-peer cloud computing projects that don't require the sharing of user data. SETI@home may be the most well-known example: when the Search for Extra-Terrestrial Intelligence (SETI) project lost its funding and could no longer afford the massive servers it used to process the data from its radio telescopes, it realized that it could distribute the work to Internet users in the form of a screensaver (so the SETI work would only be done when a user's computer was idle).

It is encouraging to see that Congress is no longer considering simply outlawing cloud computing (which used to be called distributed computing), but if there is to be an intelligible debate about policy responses to cloud computing, we must define our terms and realize that policies beneficial to some forms of cloud computing may complicate, sometimes fatally in business terms, other forms. For example, regulations imposed on companies storing users' personal data may stymie peer-to-peer backup applications like Wuala, which distributes each user's backup data to other users but uses encryption to prevent users from accessing the data they're storing for others. Wuala might be forced to shut down if regulations requiring companies to keep records for a set period of time, or to follow separate procedures for minors, were interpreted to apply to each Wuala user.

As Georgetown CCT professor Mike Nelson explained at the Google workshop, technology generally follows a clear evolution in the following steps: from hardware to software to people to organizations to policy.  It’s taken a long time to educate lawmakers about the Internet.  Today’s panelists all seemed to agree that cloud computing could be “the next big thing.”  That necessarily means that the education process for lawmakers needs to start all over again, explaining the ways in which cloud computing is similar to prior technologies, the ways it’s different, and the salient differences among the four broad categories of cloud computing described above.  Until that’s done, any talk of legislation in this area is simply premature.
