Eli Dourado, a research fellow at the Mercatus Center at George Mason University, discusses malware and possible ways to deal with it. Dourado notes several shortcomings of a government response, including the fact that malware authors come from many different countries, some of which would not cooperate with the US or other countries seeking to punish them. Introducing indirect liability for ISPs whose users spread malware, as some suggest, is unnecessary, according to Dourado. Service providers have already developed informal institutions on the Internet to deal with the problem, and these real informal systems are more efficient than a hypothetical liability regime, he argues.
We live in an entitlement era, when rights are seemingly invented out of whole cloth. It should come as no surprise, therefore, that a bit of “rights inflation” is creeping into debates about Internet policy. Today, for example, a coalition of groups and individuals (many of which typically advocate greater government activism) floated a “Declaration of Internet Freedom.” My concern with their brief manifesto is that it seems to be based on a confused interpretation of the word “freedom,” which many of the groups behind the effort take to mean freedom for the government to reorder the affairs of cyberspace to achieve values they hold dear.
The manifesto begins with the assertion that “We stand for a free and open Internet,” and then says “We support transparent and participatory processes for making Internet policy and the establishment of five basic principles:”
Expression: Don’t censor the Internet.
Access: Promote universal access to fast and affordable networks.
Openness: Keep the Internet an open network where everyone is free to connect, communicate, write, read, watch, speak, listen, learn, create and innovate.
Innovation: Protect the freedom to innovate and create without permission. Don’t block new technologies, and don’t punish innovators for their users’ actions.
Privacy: Protect privacy and defend everyone’s ability to control how their data and devices are used.
This effort follows close on the heels of a proposal from Rep. Darrell Issa (R-CA) and Sen. Ron Wyden (D-OR) to craft a “Digital Bill of Rights” that, not to be outdone, includes ten principles.
John Palfrey of the Berkman Center at Harvard Law School discusses his new book, written with Urs Gasser, Interop: The Promise and Perils of Highly Interconnected Systems. Interoperability is a term used to describe the standardization and integration of technology. Palfrey discusses how the term can describe many relationships in the world and need not be limited to technical systems. He also describes the potential pitfalls of too much interoperability. Palfrey finds that greater levels of interoperability can lead to greater competition, collaboration, and the development of standards, but can also weaken protections for privacy and security. The trick is to get to the right level of interoperability. If systems become too complex, then nobody can understand them and they can become unstable; Palfrey suggests the recent financial crisis could be an example of this. He also describes the difficulty of finding the proper role for government in encouraging or discouraging interoperability.
Keen is on solid ground when outlining the many downsides of over-sharing, beginning with the privacy and reputational consequences for each of us. “Social media is the confessional novel that we are not only all writing but also collectively publishing for everyone else to read,” he says. That can be a problem because the Internet has a very long memory. A youngster’s silly pranks or soul-searching self-revelations may seem like a fun thing to upload when such juvenile antics or angst will win praise (and plenty of pageviews) from teen peers. Your 34-year-old self, however, will likely have a very different view of that same rant, picture, or video. Yet, that content will likely still be around for the world to see when you do reach adulthood.
And Keen offers many other reasons why we should be concerned about a world of over-sharing and “hypervisibility.” The problem is that Keen drowns out these valid concerns by assaulting the reader with layers of over-the-top pessimistic prognostications and apocalyptic rhetoric. In particular, again and again and again in the book he comes back to George Orwell and his dystopian novel, 1984. Keen insists that some sort of Orwellian catastrophe is set to befall humanity because of social media over-sharing. (See this other Forbes column on Keen’s book, “Why 1984 Is Upon Us,” to see just how far this theme can be pushed).
(Adapted from Bloomberg BNA Daily Report for Executives, May 16th, 2012.)
Two years ago, the Federal Communications Commission’s National Broadband Plan raised alarms about the future of mobile broadband. Given unprecedented increases in consumer demand for new devices and new services, the agency said, network operators would need far more radio spectrum assigned to them, and soon. Without additional spectrum, the report noted ominously, mobile networks could grind to a halt, hitting a wall as soon as 2015.
That’s one reason President Obama used last year’s State of the Union address to renew calls for the FCC and the National Telecommunications and Information Administration (NTIA) to take bold action, and to do so quickly. The White House, after all, had set an ambitious goal of making mobile broadband available to 98 percent of all Americans by 2016. To support that objective, the president told the agencies to identify quickly an additional 500 MHz of spectrum for mobile networks.
By auctioning that spectrum to network operators, the president noted, the deficit could be reduced by nearly $10 billion. That way, the Internet economy could not only be accelerated, but taxpayers would actually save money in the process.
A good plan. So how is it working out?
Unfortunately, the short answer is: not well. Speaking this week at the annual meeting of the mobile trade group CTIA, FCC Chairman Julius Genachowski had to acknowledge the sad truth: “the overall amount of spectrum available has not changed, except for steps we’re taking to add new spectrum on the market.”
On Fierce Mobile IT, I’ve posted a detailed analysis of the NTIA’s recent report on government spectrum holdings in the 1755-1850 MHz range and the possibility of freeing up some or all of it for mobile broadband users.
The report follows from a 2010 White House directive issued shortly after the FCC’s National Broadband Plan was published, in which the FCC raised the alarm of an imminent “spectrum crunch” for mobile users.
By the FCC’s estimates, mobile broadband will need an additional 300 MHz of spectrum by 2015 and 500 MHz by 2020 in order to satisfy increases in demand that have only accelerated since the report was issued. So far, only a small amount of additional spectrum has been allocated. Increasingly, the FCC appears rudderless in efforts to supply the rest, and to do so in time.
The Heritage Foundation released a new study this week arguing that “Congress Should Not Authorize States to Expand Collection of Taxes on Internet and Mail Order Sales.” It’s a good contribution to the ongoing debate over Internet tax policy. In the paper, David S. Addington, the Vice President for Domestic and Economic Policy at Heritage, takes a close look at the constitutional considerations in play in this debate. Specifically, he examines the wisdom of S. 1832, “The Marketplace Fairness Act.” Addington argues that “enactment of S. 1832 would discourage free market competition” and raise a host of other issues:
The Constitution of the United States has set the legal baseline—the level playing field—around which the American free-market economy has built itself. The Constitution, as reflected in the Quill decision, is the source of the present arrangement regarding collection of state sales and use taxes by remote sellers. Ever since the Supreme Court decided Quill in 1992, American businesses have made millions of business decisions in the competitive marketplace based in part on settled expectations regarding state taxation affecting their sales transactions. The states and businesses advocating S. 1832 seek to change the current, constitutionally prescribed playing field. They seek to use governmental power to intervene in the economy to help in-state, store-based businesses by imposing a new tax-collection burden on out-of-state competitors who sell over the Internet, through mail order catalogs, or by telephone. Free-market principles generally discourage such government intervention in the economy to pick winners and losers based on legislative policy preferences.
Veronique de Rugy and I raised similar concerns in both a recent Mercatus white paper (“The Internet, Sales Taxes, and Tax Competition”) and an earlier 2003 Cato white paper (“The Internet Tax Solution: Tax Competition, Not Tax Collusion”). We argued that there are better ways to achieve “tax fairness” without sacrificing tax competition or opening the doors to unjust, unconstitutional, and burdensome state-based taxation of interstate sales. Specifically, we pointed out that an “origin-based” sourcing rule would be the cleanest, most pro-constitutional, and pro-competitive alternative. I also discussed these issues at a recent Cato event. [Video follows.]
Andrew Orlowski of The Register (U.K.) recently posted a very interesting essay making the case for treating online copyright and privacy as essentially the same problem in need of the same solution: increased property rights. In his essay (“‘Don’t break the internet’: How an idiot’s slogan stole your privacy“), he argues that, “The absence of permissions on our personal data and the absence of permissions on digital copyright objects are two sides of the same coin. Economically and legally they’re an absence of property rights – and an insistence on preserving the internet as a childlike, utopian world, where nobody owns anything, or ever turns a request down. But as we’ve seen, you can build things like libraries with permissions too – and create new markets.” He argues that “no matter what law you pass, it won’t work unless there’s ownership attached to data, and you, as the individual, are the ultimate owner. From the basis of ownership, we can then agree what kind of rights are associated with the data – eg, the right to exclude people from it, the right to sell it or exchange it – and then build a permission-based world on top of that.”
And so, he concludes, we should set aside concerns about Internet regulation and information control and get down to the business of engineering solutions that would help us property-tize both intangible creations and intangible facts about ourselves to better shield our intellectual creations and our privacy in the information age. He builds on the thoughts of Mark Bide, a tech consultant:
For Bide, privacy and content markets are just technical challenges that need to be addressed intelligently. “You can take two views,” he told me. “One is that every piece of information flowing around a network is a good thing, and we should know everything about everybody, and have no constraints on access to it all.” People who believe this, he added, tend to be inflexible – there is no half-way house.

“The alternative view is that we can take the technology to make privacy and intellectual property work on the network. The function of copyright is to allow creators and people who invest in creation to define how it can be used. That’s the purpose of it.

“So which way do we want to do it?” he asks. “Do we want to throw up our hands and do nothing? The workings of a civilised society need both privacy and creators’ rights.”

But this is a new way of thinking about things: it will be met with cognitive dissonance. Copyright activists who fight property rights on the internet and have never seen a copyright law they like generally do like their privacy. They want to preserve it, and will support laws that do. But to succeed, they’ll need to argue for stronger property rights. They have yet to realise that their opponents in the copyright wars have been arguing for those too, for years. Both sides of the copyright “fight” actually need the same thing.

This is odd, I said to Bide. How can he account for this irony? “Ah,” says Bide. “Privacy and copyright are two things nobody cares about unless it’s their own privacy, and their own copyright.”
These are important insights that get at a fundamental truth that all too many people ignore today: At root, most information control efforts are related, and solutions for one problem can often be used to address others. But there’s another insight that Orlowski ignores: Whether we are discussing copyright, privacy, online speech and child safety, or cybersecurity, all these efforts to control the free flow of digitized bits over decentralized global networks will be increasingly complex, costly, and riddled with unintended consequences. Importantly, that is true whether you seek to control information flows through top-down administrative regulation or by assigning and enforcing property rights in intellectual creations or private information.
Let me elaborate a bit (and I apologize for the rambling rant that follows).
After first granting and then, a year later, revoking LightSquared’s waiver to repurpose its satellite spectrum, the agency has taken a more conservative (albeit slower) course with Dish. Yesterday, the agency initiated a Notice of Proposed Rulemaking that would, if adopted, assign flexible use rights to about 40 MHz of MSS spectrum licensed to Dish.
Current allocations of spectrum have little to do with the technical characteristics of different bands. That existing licenses limit Dish and LightSquared to satellite applications, for example, is simply an artifact of more-or-less random carve-outs in the absurdly complicated spectrum map managed by the agency since 1934. Advances in technology make it possible to successfully use many different bands for many different purposes.
But the legacy of the FCC’s command-and-control model, in which the agency allocates spectrum to favor “new” services (new, that is, until they are made obsolete in later years or decades) and shapes competition to its changing whims, is a confusing and unnecessary pile-up of limitations and conditions that severely and artificially limits the ways in which spectrum can be redeployed as technology and consumer demands change. Today, the FCC sits squarely in the middle of each of over 50,000 licenses, a huge bottleneck that is making the imminent spectrum crisis in mobile broadband even worse.
The Technology Liberation Front is the tech policy blog dedicated to keeping politicians' hands off the 'net and everything else related to technology.