Articles by Adam Marcus

Adam Marcus was Chief Operating Officer for TechFreedom and a Research Fellow & Senior Technologist at The Progress & Freedom Foundation. Prior to PFF, he worked as a technical writer for Citrix Systems, Inc., the Centers for Disease Control and Prevention, and the Department of Transportation. He has also interned at the California Public Utilities Commission and the Cato Institute and provided technical consulting to a number of non-profits. Marcus received his law degree from Santa Clara University, his MA in Communications, Culture & Technology from Georgetown University, and his BA in English from the University of Florida. In his spare time he re-flashes the firmware on every device he can, re-wires his home media center, and re-programs his universal remote control.


As noted in the first installment of our “Privacy Solution Series,” we are outlining various user-empowerment or user “self-help” tools that allow Internet users to better protect their privacy online, and especially to defeat tracking for online behavioral advertising purposes. These tools and methods form an important part of a layered approach that we believe offers an effective alternative to government-mandated regulation of online privacy.

In the last installment, we covered the privacy features embedded in Microsoft’s Internet Explorer (IE) 8. This installment explores the privacy features in the Mozilla Foundation’s Firefox 3, both the current 3.0.7 version and the second beta for the next release, 3.5 (NOTE – the name for the next version of Firefox was just changed from 3.1 to 3.5 to reflect the large number of changes, but the beta is still named 3.1 Beta 2). We’ll make clear which features are new to 3.1/3.5 and which are shared with 3.0.7. Future installments will cover Google’s Chrome 1.0, Apple’s Safari 4, and some of the more useful privacy plug-ins for browsers. The availability and popularity of privacy plug-ins for Firefox such as AdBlock (which we discussed here), NoScript and Tor significantly augments the privacy management capabilities of Firefox beyond what is currently baked into the browser. In evaluating the Web browsers, we examine:

(1) cookie management;
(2) private browsing; and
(3) other privacy features

Continue reading →

By Adam Thierer, Berin Szoka, & Adam Marcus

As noted in the first installment of our “Privacy Solution Series,” we are outlining various user-empowerment or user “self-help” tools that allow Internet users to better protect their privacy online, and especially to defeat tracking for online behavioral advertising purposes. These tools and methods form an important part of a layered approach that we believe offers an effective alternative to government-mandated regulation of online privacy.

In some of the upcoming installments we will be exploring the privacy controls embedded in the major web browsers consumers use today: Microsoft’s Internet Explorer (IE) 8, the Mozilla Foundation’s Firefox 3, Google’s Chrome 1.0, and Apple’s Safari 4. In evaluating these browsers, we will examine three types of privacy features:

(1) cookie management controls;
(2) private browsing; and
(3) other privacy features

We will first be focusing on the default features and functions embedded in the browsers. We plan to do subsequent installments on the various downloadable “add-ons” available for browsers, as we already did for AdBlock Plus in the second installment of this series. Continue reading →

This is the third in a series of articles about Internet technologies. The first article was about web cookies. The second article explained the network neutrality debate. This article explains network management systems. The goal of this series is to provide a solid technical foundation for the policy debates that new technologies often trigger. No prior knowledge of the technologies involved is assumed.

There has been lots of talk on blogs recently about Cox Communications’ network management trial. Some see this as another nail in Network Neutrality’s coffin, while many users are just hoping for anything that will make their network connection faster.

As I explained previously, the Network Neutrality debate is best understood as a debate about how to best manage traffic on the Internet.

Those who advocate for network neutrality are actually advocating for legislation that would set strict rules for how ISPs manage traffic. They essentially want to re-classify ISPs as common carriers. Those on the other side of the debate believe that the government is unable to set rules for something that changes as rapidly as the Internet. They want ISPs to have complete freedom to experiment with different business models and believe that anything that approaches real discrimination will be swiftly dealt with by market forces.

But what both sides seem to ignore is that traffic must be managed. Even if every connection and router on the Internet is built to carry ten times the expected capacity, there will be occasional outages. It is foolish to believe that routers will never become overburdened–they already do. Current routers already have a system for prioritizing packets when they get overburdened; they just drop all packets received after their buffers are full. This system is fair, but it’s not optimized.
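This drop-everything-when-full behavior can be sketched in a few lines of Python. This is an illustrative toy, not how real router firmware is written; the class and variable names are my own:

```python
from collections import deque

class TailDropQueue:
    """A router buffer that accepts packets until full, then silently drops new arrivals."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.buffer = deque()
        self.dropped = 0

    def enqueue(self, packet):
        if len(self.buffer) >= self.capacity:
            self.dropped += 1  # buffer full: the packet is simply lost
            return False
        self.buffer.append(packet)
        return True

    def dequeue(self):
        # Packets leave strictly in arrival order; no packet gets priority.
        return self.buffer.popleft() if self.buffer else None

q = TailDropQueue(capacity=3)
for pkt in ["p1", "p2", "p3", "p4", "p5"]:
    q.enqueue(pkt)
print(len(q.buffer), q.dropped)  # 3 packets queued, 2 dropped
```

Every packet is treated identically, which is why the scheme is “fair” — but a dropped voice packet hurts a phone call far more than a dropped file-transfer packet hurts a download.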

The network neutrality debate needs to shift to a debate on what should be prioritized and how. One way packets can be prioritized is by the type of data they’re carrying. Applications that require low latency would be prioritized and those that don’t require low latency would not be prioritized.
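A type-based priority scheduler can be sketched as follows. The traffic classes and priority values here are my own illustrative assumptions, not any ISP’s actual policy:

```python
import heapq

# Lower number = sent sooner. Latency-sensitive traffic (VoIP, gaming)
# jumps ahead of bulk transfers, which tolerate delay.
PRIORITY = {"voip": 0, "gaming": 1, "web": 2, "bulk": 3}

def schedule(packets):
    """Return packets in the order a priority scheduler would transmit them.

    The arrival index breaks ties so equal-priority packets stay in order.
    """
    heap = [(PRIORITY[kind], i, kind) for i, kind in enumerate(packets)]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[2] for _ in range(len(heap))]

arrivals = ["bulk", "voip", "web", "bulk", "gaming"]
print(schedule(arrivals))  # ['voip', 'gaming', 'web', 'bulk', 'bulk']
```

The policy question the debate should focus on is exactly what goes into that priority table, and who gets to decide.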

Continue reading →

As a means of introducing myself to TLF readers, this is an article that I wrote for the PFF blog in September that has not been previously mentioned on the TLF. Most of my other PFF blog posts have been cross-posted by Adam Thierer or Berin Szoka, but I’ve taken ownership of those posts so they appear on my TLF author page.

This is the first in a series of articles that will focus directly on technology instead of technology policy. The average member of Congress is 57 years old, which means most members were at least 30 when the IBM PC was introduced in 1981. So it is not surprising that lawmakers have difficulty with cutting-edge technology. The goal of this series is to provide a solid technical foundation for the policy debates that new technologies often trigger. No prior knowledge of the technologies involved is assumed, but no insult to the reader’s intelligence is intended.

This article focuses on cookies–not the cookies you eat, but the cookies associated with browsing the World Wide Web. There has been public concern over the privacy implications of cookies since they were first developed. But to understand them, you must know a bit of history.

According to Tim Berners Lee, the creator of the World Wide Web, “[g]etting people to put data on the Web often was a question of getting them to change perspective, from thinking of the user’s access to it not as interaction with, say, an online library system, but as navigation th[r]ough a set of virtual pages in some abstract space. In this concept, users could bookmark any place and return to it, and could make links into any place from another document. This would give a feeling of persistence, of an ongoing existence, to each page.”[1. Tim Berners-Lee, Weaving The Web: The Original Design and Ultimate Destiny of the World Wide Web. p. 37. Harper Business (2000).] The Web has changed quite a bit since the early 1990s.

Today, websites are much more dynamic and interactive, with every page being customized for each user. Such customization could include automatically selecting the appropriate language for the user based on where they’re located, displaying only content that has been added since the last time the user visited the site, remembering a user who wants to stay logged into a site from a particular computer, or keeping track of items in a virtual shopping cart. These features are simply not possible without the ability for a website to distinguish one user from another and to remember a user as they navigate from one page to another. Today, in the Web 2.0 era, instead of Web pages having persistence (as Berners-Lee described), we have dynamic pages and “user-persistence.”

This paper describes the various methods websites can use to enable user-persistence and how this affects user privacy. But the first thing the reader must realize is that the Web was not initially designed to be interactive; indeed, as the quote above shows, the goal was the exact opposite. Yet interactivity is critical to many of the things we all take for granted about web content and services today.
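To make the mechanics concrete, here is a minimal sketch of the cookie exchange using Python’s standard http.cookies module. The cookie name session_id and its value are illustrative choices, not a standard:

```python
from http.cookies import SimpleCookie

# Server side: attach an identifier to the outgoing response.
cookie = SimpleCookie()
cookie["session_id"] = "abc123"
cookie["session_id"]["path"] = "/"
header = cookie.output()  # the Set-Cookie header sent to the browser
print(header)             # Set-Cookie: session_id=abc123; Path=/

# On every later request the browser echoes the value back in a
# Cookie header, and the server parses it to recognize the user.
incoming = SimpleCookie("session_id=abc123")
print(incoming["session_id"].value)  # abc123
```

That round trip — the server hands out an identifier, and the browser volunteers it back on every subsequent request — is all that “user-persistence” is, and it is also the entire basis of the privacy concern.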

Continue reading →

The introduction below was originally written by Adam Thierer, but now that I (Adam Marcus) am a full-fledged TLF member, I have taken authorship.
___________________________________________________

My PFF colleague Bret Swanson had a nice post here yesterday talking about the evolution of the debate over edge caching and network management (“Bandwidth, Storewidth, and Net Neutrality“), but I also wanted to draw your attention to a related essay by another PFF colleague of mine. Adam Marcus, who serves as a Research Fellow and Senior Technologist at PFF, has started a wonderful series of “Nuts & Bolts” essays meant to “provide a solid technical foundation for the policy debates that new technologies often trigger.” His latest essay is on network neutrality and edge caching, which has been the topic of heated discussion since the Wall Street Journal’s front-page story on Monday that Google had approached major cable and phone companies and supposedly proposed to create a fast lane for its own content.

Anyway, Adam Marcus gave me permission to reprint the article in its entirety down below. I hope you find this background information useful.
___________________________________________________

Nuts and Bolts: Network neutrality and edge caching

by Adam Marcus, Progress & Freedom Foundation

December 17, 2008

This is the second in a series of articles about Internet technologies. The first article was about web cookies. This article explains the network neutrality debate. The goal of this series is to provide a solid technical foundation for the policy debates that new technologies often trigger. No prior knowledge of the technologies involved is assumed.

To understand the network neutrality debate, you must first understand bandwidth and latency. There are lots of analogies equating the Internet to roadways because the analogies are quite instructive. For example, if one or two people need to travel across town, a fast sports car is probably the fastest method. But if 50 people need to travel across town, it may require 25 trips in a single sports car. So a bus which can transport all 50 people in a single trip may be “faster” overall. The sports car is faster, but the bus has more capacity. Bandwidth is a measure of capacity, of how much data can be transmitted in a fixed period of time. It is usually measured in megabits per second (Mbps). Latency is a measure of speed, of the time it takes a single packet of data to travel between two points. It is usually measured in milliseconds. The “speeds” that ISPs advertise have nothing to do with latency; they’re actually referring to bandwidth. ISPs don’t advertise latency because it’s different for each site you’re trying to reach.
Continue reading →

The introduction below was originally written by Berin Szoka, but now that I (Adam Marcus) am a full-fledged TLF member, I have taken authorship.


Adam Marcus, our exceptionally tech-savvy new research assistant at PFF, has published his first piece at the PFF blog, which I reprint here for your edification.

Today Google’s DC office hosted an interesting panel on cloud computing.  What was missing was a good definition of what “cloud computing” actually is.

While Wikipedia has its own broad definition of cloud computing, many think of cloud computing more narrowly as strictly web-based applications for which clients need nothing but a web browser. But that definition doesn’t cover things like Skype and SETI@home. And just because PFF has implemented Outlook Web Access so we can access the Exchange server via the Web doesn’t necessarily mean we’ve implemented what most people might think of as “cloud computing.” Yet these are all variations on a common theme, which leads me to propose my own basic definition: any client/server system that operates over the Internet.

To understand the potential policy and legal issues raised by cloud computing so-defined, one must break down the discussion into a 4-part grid.  One axis is divided into private data (e.g., email) and public data (e.g., photo sharing).  The other axis is divided into data hosted on a single server or centralized server farm and data hosted on multiple computers in a dynamic peer-to-peer network (e.g., BitTorrent file sharing).

| Examples | User Data is Public | User Data is Private |
| --- | --- | --- |
| Centralized Server(s) | Blogs, Discussion boards, Flickr | Web-based email servers, Windows Terminal Services |
| Peer-to-Peer | BitTorrent, FreeNet (article) | Skype, Wuala |

Continue reading →