Open Source, Open Standards & Peer Production – Technology Liberation Front
https://techliberation.com – Keeping politicians' hands off the Net & everything else related to technology

Event Video: Debating Frontier AI Regulation
https://techliberation.com/2023/09/15/event-video-debating-frontier-ai-regulation/
Fri, 15 Sep 2023

The Brookings Institution hosted this excellent event on frontier AI regulation this week featuring a panel discussion I was on that followed opening remarks from Rep. Ted Lieu (D-CA). I come in around the 51-min mark of the event video and explain why I worry that AI policy now threatens to devolve into an all-out war on computation and open source innovation in particular.

I argue that some pundits and policymakers appear to be on the way to substituting a very real existential risk (authoritarian government control over computation and science) for the hypothetical existential risk of powerful AGI. I explain how there are better, less destructive ways to address frontier AI concerns than the highly repressive approaches currently being considered.

I have developed these themes and arguments at much greater length in a series of essays over on Medium over the past few months. If you care to read more, the four key articles to begin with are:

In June, I also released this longer R Street Institute report on “Existential Risks & Global Governance Issues around AI & Robotics,” and then spent an hour talking about these issues on the TechPolicyPodcast about “Who’s Afraid of Artificial Intelligence?” All of my past writing and speaking on AI, ML, and robotics policy can be found here, and that list is updated every month.

As always, I’ll have much more to say on this topic as the war on computation expands. This is quickly becoming the most epic technology policy battle of modern times.

On Doctorow’s “Adversarial Interoperability”
https://techliberation.com/2020/08/29/on-doctorows-adversarial-interoperability/
Sat, 29 Aug 2020

Interoperability is a topic that has long been of interest to me. How networks, platforms, and devices work with each other–or sometimes fail to–is an important engineering, business, and policy issue. Back in 2012, I spilled out over 5,000 words on the topic when reviewing John Palfrey and Urs Gasser’s excellent book, Interop: The Promise and Perils of Highly Interconnected Systems.

I’ve always struggled with interoperability issues, however, and have often avoided them because of the sheer complexity of it all. Some interesting recent essays by sci-fi author and digital activist Cory Doctorow remind me that I need to get back on top of the issue. His latest essay is a call-to-arms in favor of what he calls “adversarial interoperability.” “[T]hat’s when you create a new product or service that plugs into the existing ones without the permission of the companies that make them,” he says. “Think of third-party printer ink, alternative app stores, or independent repair shops that use compatible parts from rival manufacturers to fix your car or your phone or your tractor.”

Doctorow is a vociferous defender of expanded digital access rights of many flavors and his latest essays on interoperability expand upon his previous advocacy for open access and a general freedom to tinker. He does much of this work with the Electronic Frontier Foundation (EFF), which shares his commitment to expanded digital access and interoperability rights in various contexts.

I’m in league with Doctorow and EFF on some of these things, but also find myself thinking they go much too far in other ways. At root, their work and advocacy raise a profound question: should there be any general right to exclude on digital platforms? Although he doesn’t always come right out and say it, Doctorow’s work often seems like an outright rejection of any sort of property rights in networks or platforms. Generally speaking, he does not want the law to recognize any right for tech platforms to exclude using digital fences of any sort.

Where to Draw the Lines?

As someone who has authored a book about the importance of permissionless innovation, I need to be able to answer questions about where these lines between open versus closed systems are drawn. Definitions and framing matter, however. I use “permissionless innovation” as a descriptor for one possible policy disposition when considering where legal and regulatory defaults should be set. Another conception of permissionless innovation is more of an engineering ideal; a general freedom to connect, tinker, modify, etc. (I speak more about these conceptions in my latest book, Evasive Entrepreneurs.) Of course, someone advocating permissionless innovation as a policy default will sometimes be confronted with the question of what the law should say when someone behaves in an “evasive” fashion in the latter conception of permissionless innovation.

Doctorow would generally answer that question by saying that law should not be rigged to favor exclusion through laws like the DMCA (and specifically the law’s anti-circumvention provisions), the Computer Fraud and Abuse Act, patent law, and various other rules and laws. “[T]he current crop of Big Tech companies has secured laws, regulations, and court decisions that have dramatically restricted adversarial interoperability.”

Generally speaking, I agree. I’m not a fan of technocratic laws or regulations that seek to micro-manage interoperability and which stack the deck in favor of exclusionary conduct with steep penalties for evasion. But does that mean adversarial interoperability should be permitted in all cases? Should there exist any sort of common law presumption one way or the other when a user or competitor seeks access to an existing private platform or device?

Specifics matter here and I don’t have time to get into all the case studies that Doctorow goes through. Some are no-brainers, like the infamous Lexmark case involving refillable printer ink cartridges. Other cases are far more complicated, at least for me. Does Epic, creator of Fortnite, have a right of adversarial interoperability that it can exercise against Apple and its App Store? As Dirk Auer suggests in a new essay, this episode looks more like a straightforward pricing dispute. Epic is making it out to be much more than that, suggesting Apple is guilty of unfair and exclusionary practices that require a legal remedy.

Why not take that logic further and just say Apple’s App Store is tantamount to a natural monopoly or digital essential facility that Epic and everyone else is entitled to use on whatever terms they want? For that matter, why not apply the same logic to Epic’s Fortnite platform or even its Unreal Engine? Does every other gaming developer have a right to piggyback on the juggernaut that Epic has built?

This gets to the core question about Doctorow’s concept of adversarial interoperability: exactly what should common law and the courts say when platform owners make access rights a simple pricing matter and declare, “You pay or you are out”? Like Doctorow and EFF, I don’t want Apple to benefit from any special favors from laws like the DMCA. Where we differ is that I would still leave the door open for Apple to exercise various other common law contractual rights or property rights in court.

I suspect Doctorow would deny any such claims by Apple or anyone else. If so, I would like to see him spell out in more precise terms exactly what Apple’s property rights and contractual rights are in this instance. Or, again, should we just treat the App Store as a digital commons with unfettered open access rights for developers? If so, would Apple be required to still manage the resource once it is a quasi-commons?

I think that would end miserably, but would like to hear Doctorow’s preferred approach before saying more. I suspect a lot rides on the distinction between “open” versus “proprietary” standards, but compared to Doctorow and EFF, I am willing to embrace a world of both open and proprietary systems, and many hybrids in between. I don’t want the law favoring one type over the other, but that means I need to endorse a generalized property right for digital operators such that they can still exclude others (even in the absence of artificial regulatory rights like those the DMCA creates). Again, I suspect Doctorow would reject that standard, preferring a generalized right of access, even if that means the platforms become de facto commons.

More Radical Steps

Elsewhere, Doctorow has said that some of these questions would be better addressed through more aggressive antitrust regulation. Mere data portability or mandatory interoperability isn’t enough for him. “Data portability is important,” Doctorow says, “but it is no substitute for the ability to have ongoing access to a service that you’re in the process of migrating away from.”

In his latest online book on “How to Destroy Surveillance Capitalism,” Doctorow suggests that it is time to “make Big Tech small again” through an “anti-monopoly ecology movement.” That “means bans on mergers between large companies, on big companies acquiring nascent competitors, and on platform companies competing directly with the companies that rely on the platforms.” And he desires a host of other remedies.

So, here we have the convergence of interoperability policy and antitrust policy, with a layer of property confiscation on top, apparently. “Now it’s up to us to seize the means of computation, putting that electronic nervous system under democratic, accountable control,” he insists in his latest manifesto.

What’s funny about this is that Doctorow begins most of his essays by pointing out all the ways that politics is the problem when it comes to access issues, only to end by suggesting that a lot more political meddling is the required solution. He repeatedly laments how large tech players have so often been able to convince lawmakers and regulators to pass special laws or regulations that work in their favor. Yet, in his We-Can-Build-A-Better-Bureaucrat model of things, all those old problems will apparently disappear once we get the right people in power and get rid of those nefarious capitalist schemers.

Thus, what really animates Doctorow’s advocacy for adversarial interoperability is a deep suspicion of free market capitalism and property rights in particular. In this worldview, interoperability really just becomes a Trojan Horse meant to help bring down the entire capitalist order. Am I exaggerating? “As to why things are so screwed up? Capitalism.” Those are his exact words from the conclusion of his latest book.

Adversarial Innovation & Evolutionary Interop

Still, Doctorow raises many legitimate issues about interconnection and digital access rights. But we need a better approach to work through these questions than the one he suggests.

In my lengthy review of the Palfrey and Gasser Interop book, I tried to sketch out an alternative framework for thinking seriously about these issues. I referred to my preferred approach as “experimental interoperability” or “evolutionary interoperability.” I described this as the theory that ongoing marketplace experimentation with technical standards, modes of information production and dissemination, and interoperable information systems, is almost always preferable to the artificial foreclosure of this dynamic process through state action. The former allows for better learning and coping mechanisms to develop while also incentivizing the spontaneous, natural evolution of the market and market responses.

Adversarial interoperability is important, but not nearly as important as adversarial innovation and facilities-based competition. Stated differently, access rights to existing systems are an important value, but the incentives we have in place to encourage entirely new systems are what really matter most. At some point, a generalized right of access to existing systems discourages the sort of platform-building that could help give rise to the sort of creative destruction we have seen at work repeatedly in the past and that we still need today. Taken too far, adversarial interoperability threatens to undermine this goal. Why seek to build a better alternative platform if you can just endlessly free ride off someone else’s by force of law?

Thus, I prefer to work at the margins and think through how to balance these competing claims of access and interoperability rights versus contractual and property rights. My take will be too utilitarian not only for Doctorow but also for some libertarians, who want clear answers to all these questions based upon their preferred natural law-oriented constructions of rights. The problem with that approach is that it leads to all-or-nothing extremes (complete digital property rights, or virtually none), and that approach is fundamentally unworkable and destructive. We need to think harder about how to balance these rights and values in a pro-competitive, pro-innovation fashion.

There is No Such Thing as Optimal Interoperability

In sum, there is no such thing as “optimal interoperability.” Sometimes proprietary or “closed” systems will offer the public features and options that people find preferable to “open” ones. “There are many reasons why consumers might prefer ‘closed’ systems – even when they have to pay a premium for them,” argues Dirk Auer in a separate essay. It could be greater convenience, security, or other things. Palfrey and Gasser correctly noted in their book that “the state is rarely in a position to call a winner among competing technologies” (p. 174). Moreover, they concluded:

“Lawmakers need to keep in view the limits of their own effectiveness when it comes to accomplishing optimal levels of interoperability. Case studies of government intervention, especially where complex information technologies are involved, show that states tend to be ill suited to determine on their own what specific technology will be the best option for the future.” (p. 175)

A thousand amens to that! The law should not artificially foreclose experimentation with many different types of platforms, standards, devices and the interoperability that exists among them.

The FCC Targets Cable Set-Top Boxes—Why Now?
https://techliberation.com/2016/02/02/the-fcc-targets-cable-set-top-boxes-why-now/
Tue, 02 Feb 2016

With great fanfare, FCC Chairman Thomas Wheeler is calling for sweeping changes to the way cable TV set-top boxes work.

In an essay published Jan. 27 by Re/Code, Wheeler began by citing the high prices consumers pay for set-top box rentals and bemoaned the fact that alternatives are not easily available. Yet for all the talk and tweets about pricing and consumer lock-in, Wheeler did not propose an inquiry into set-top box profit margins, nor into whether the supply chain is unduly controlled by the cable companies. Neither did Wheeler propose an investigation into the complaints consumers have made about cable companies’ hassles around CableCards, which under FCC mandate the companies must provide to customers who buy their own set-top boxes.

In fact, he dropped the pricing issue halfway through and began discussing access to streaming content:

To receive streaming Internet video, it is necessary to have a smart TV, or to watch it on a tablet or laptop computer that, similarly, do not have access to the channels and content that pay-TV subscribers pay for. The result is multiple devices and controllers, constrained program choice and higher costs.

This statement seems intentionally misleading. Roku, Apple TV and Amazon Fire sell boxes that connect to TVs and allow a huge amount of streaming content to play. True, the devices are still independent of the set-top cable box but there is no evidence that this lack of integration is a competitive barrier.

A new generation of devices, called media home gateways (MHGs), is poised to provide this integration, as well as manage other media-based cloud services on behalf of consumers. This is where Wheeler’s proposal should be worrisome. He writes:

The new rules would create a framework for providing device manufacturers, software developers and others the information they need to introduce innovative new technologies, while at the same time maintaining strong security, copyright and consumer protections.

This sounds much more like a plan to dictate operating systems, user interfaces and other hardware and software standards for equipment that until now has been unregulated. Wheeler gives no explanation as to how his proposal will lead to lower prices or development of a direct-to-consumer sales channel.

[M]y proposal will pave the way for a competitive marketplace for alternate navigation devices, and could even end the need for multiple remote controls, allowing you to use one for all of the video sources you use.

What Wheeler really wants is FCC management of the transition from today’s set-top boxes to the media home gateways (MHGs) just beginning to appear on the market—a foray into customer premises equipment regulation unseen since the 1960s.

For good reason, the words “media home gateway” never appear in Wheeler’s Re/Code article. By avoiding mention of MHGs, he can play his “lack of competition” card, as he did in Thursday’s press briefing on his proposal.

There’s more than a whiff of misdirection here. Set-top boxes are a maturing market. An October 2015 TechNavio report forecasts the shipment volume of the global set-top box market to decline at a compound annual rate of 1.34% over the period 2014-2019. By revenue, the market is expected to decline at a compound annual rate of 1.36% during the forecast period. When consumers “cut the cable cord,” as some 21 million have, it’s set-top boxes that get unplugged.

At the same time, TechNavio forecasts the global MHG market to grow at a compound annual rate of 7.82% over the same period. Elsewhere, SNL Kagan’s Multimedia Research Group forecasts MHG shipments will exceed 24 million in 2017, up from 7.7 million in 2012. The long list of MHG manufacturers includes ActionTec, Arris, Ceva, Huawei, Humax, Samsung and Technicolor.
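For readers who want a feel for what those compound annual rates imply, here is a minimal sketch of the underlying arithmetic. The only figures taken from the forecasts quoted above are the rates themselves (-1.34% and +7.82%); the 2014 baseline volumes are purely illustrative placeholders, not numbers from TechNavio or SNL Kagan.

```python
# Minimal sketch of compound-annual-growth-rate (CAGR) projections.
# Only the rates come from the forecasts quoted above; the 2014
# baselines are hypothetical index values (2014 = 100).

def project(base, cagr, years):
    """Return year-by-year volumes implied by a compound annual rate."""
    return [base * (1 + cagr) ** n for n in range(years + 1)]

set_top_2014 = 100.0   # hypothetical index value for set-top boxes
mhg_2014 = 100.0       # hypothetical index value for media home gateways

print("Set-top boxes, 2014-2019:",
      [round(v, 1) for v in project(set_top_2014, -0.0134, 5)])
print("Media home gateways, 2014-2019:",
      [round(v, 1) for v in project(mhg_2014, 0.0782, 5)])

# A -1.34% CAGR shrinks the set-top index to roughly 93 by 2019, while a
# +7.82% CAGR pushes the MHG index to roughly 146: a maturing market on
# one side, a fast-growing replacement on the other.
```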

MHGs are the “alternative navigation devices” Wheeler coyly refers to in his Re/Code essay. These devices will replace the set-top boxes in use today, but because of their ability to handle Internet streaming, they are likely to be available through more than one channel. That’s why the only way to view Wheeler’s call to “unlock the set-top box” is as a pre-emptive move to extend the FCC’s regulation into the delivery of streaming media.

To be sure, if the FCC mandates integration of streaming options into cable-provided MHGs, streaming companies would gain a stronger foothold in consumers’ homes, which would then allow them to share their apps, gather data on users, and, perhaps most lucratively of all, control the interface on which channels are displayed, as noted by The Verge’s Ashley Carman.

Yet the streaming companies that would appear to benefit most from this proposal have thus far been quiet. Perhaps because Wheeler has made no secret that he believes Apple TV, Amazon Fire and Roku are multichannel video programming distributors (MVPDs), FCC-speak for “local cable companies.” Is his “unlock the box” plan precisely the opposite? Is it an effort to fold streaming aggregators into the existing cable TV regulatory platform, with all its myriad rules, regulations, legal obligations and—dare we say it—fees and surcharges? You might roll your eyes, but this is the only analysis in which the proposal, which focuses on “device manufacturers, software developers and others,” makes sense.

But does the FCC have the right to require cable companies to share customer data acquired through the infrastructure and software they built and own? It’s yet another iteration of the old unbundled network elements model that is consistently shot down by the courts yet one that the FCC can’t seem to get past.

Arcane details aside, the FCC should not be involved in directing evolution paths, operating software or other product features. It creates too much opportunity for lobbying and rent-seeking. History shows that when the government gets granularly involved in promoting technology direction, costs go up and innovation suffers as capital is diverted into politically favored choices where it ends up wasted. The debacles with the Chevy Volt and Solyndra are just two recent examples of the dangers inherent when bureaucrats try to pick winners, or give a subset of companies in one industry an assist at the expense of others.

This post originally appeared Feb. 1, 2016 on the R Street Institute official blog.

New Paper on The Sharing Economy and Consumer Protection Regulation
https://techliberation.com/2014/12/08/new-paper-on-the-sharing-economy-and-consumer-protection-regulation/
Mon, 08 Dec 2014

I’ve just released a short new paper, co-authored with my Mercatus Center colleagues Christopher Koopman and Matthew Mitchell, on “The Sharing Economy and Consumer Protection Regulation: The Case for Policy Change.” The paper is being released to coincide with a Congressional Internet Caucus Advisory Committee event that I am speaking at today on “Should Congress be Caring About Sharing? Regulation and the Future of Uber, Airbnb and the Sharing Economy.”

In this new paper, Koopman, Mitchell, and I discuss how the sharing economy has changed the way many Americans commute, shop, vacation, borrow, and so on. Of course, the sharing economy “has also disrupted long-established industries, from taxis to hotels, and has confounded policymakers,” we note. “In particular, regulators are trying to determine how to apply many of the traditional ‘consumer protection’ regulations to these new and innovative firms.” This has led to a major debate over the public policies that should govern the sharing economy.

We argue that, coupled with the Internet and various new informational resources, the rapid growth of the sharing economy alleviates the need for much traditional top-down regulation. These recent innovations are likely doing a much better job of serving consumer needs by offering new innovations, more choices, more service differentiation, better prices, and higher-quality services. In particular, the sharing economy and the various feedback mechanisms it relies upon help solve the traditional economic problem of “asymmetrical information,” which is often cited as a rationale for regulation. We conclude, therefore, that “the key contribution of the sharing economy is that it has overcome market imperfections without recourse to traditional forms of regulation. Continued application of these outmoded regulatory regimes is likely to harm consumers.”
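To make the feedback-mechanism point concrete, here is a minimal sketch of how a platform might fold user reviews into a reputation score. It is not drawn from any particular platform’s actual algorithm; the prior values and the sample ratings are illustrative assumptions.

```python
# Minimal sketch of a reputational feedback mechanism.
# A smoothed ("Bayesian") average keeps a provider with only a few
# reviews from outranking one with a long track record, and lets
# persistently bad actors sink in the rankings over time.

def reputation_score(ratings, prior_mean=3.5, prior_weight=5):
    """Average of 1-5 star ratings, shrunk toward a platform-wide prior."""
    total = sum(ratings) + prior_mean * prior_weight
    count = len(ratings) + prior_weight
    return total / count

sample_providers = {
    "established driver": [5, 5, 4, 5, 5, 4, 5, 5, 5, 4, 5, 5],  # long, consistent record
    "new driver": [5, 5],                                        # too few reviews to trust fully
    "bad actor": [2, 1, 2, 1, 1, 2, 1],                          # feedback surfaces poor service
}

for name, ratings in sample_providers.items():
    print(f"{name}: {reputation_score(ratings):.2f}")
```

The design point is simply that the information buyers once lacked (how has this provider treated past customers?) is generated and published by the transaction process itself, which is the sense in which these systems address the asymmetric-information rationale for regulation.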

We note that such harm is especially likely when the failure of traditional regulatory models is taken into account. As we document in the paper, all too often, well-intentioned “public interest” regulation is captured by industry and used to serve its interests:

by limiting entry, or by raising rivals’ costs, regulations can be useful to the regulated firms. Though regulations often make consumers worse off, they are often sustained by political pressure from consumer advocates because they can be disguised as “consumer protection.”

We provide evidence of the problem of regulatory capture and note it has been a particular problem in many of the sectors that are now being disrupted by sharing economy innovators–such as taxi and transportation services. It is evident that regulation has not lived up to its lofty expectations in many sectors. Accordingly, when market circumstances change dramatically—or when new technology or competition alleviate the need for regulation—then public policy should evolve and adapt to accommodate these new realities.

Of course, many bad laws and regulations remain on the books, and they have constituencies who will defend them vociferously. Our paper concludes with some recommendations for how to “level the regulatory playing field” in a pro-consumer, pro-innovation fashion. We note that while differential regulatory treatment of incumbents and new entrants does represent a potential problem, there’s a sensible, pro-consumer and pro-innovation way to solve that problem:

such regulatory asymmetries represent a legitimate policy problem. But the solution is not to punish new innovations by simply rolling old regulatory regimes onto new technologies and sectors. The better alternative is to level the playing field by “deregulating down” to put everyone on equal footing, not by “regulating up” to achieve parity. Policymakers should relax old rules on incumbents as new entrants and new technologies challenge the status quo. By extension, new entrants should only face minimal regulatory requirements as more onerous and unnecessary restrictions on incumbents are relaxed.

Download this new paper on the Mercatus website or via SSRN or ResearchGate. Incidentally, we plan to release a much longer Mercatus Center white paper early next year that will explore reputational feedback mechanisms in far greater detail and explain how these systems help address the problem of “asymmetrical information” in these and other contexts.


Also see: “The Debate over the Sharing Economy: Talking Points & Recommended Reading,” which includes the following video of me on the Stossel Show discussing these issues recently.

The Debate over the Sharing Economy: Talking Points & Recommended Reading
https://techliberation.com/2014/09/26/the-debate-over-the-sharing-economy-talking-points-recommended-reading/
Fri, 26 Sep 2014

The sharing economy is growing faster than ever and becoming a hot policy topic these days. I’ve been fielding a lot of media calls lately about the nature of the sharing economy and how it should be regulated. (See latest clip below from the Stossel show on Fox Business Network.) Thus, I sketched out some general thoughts about the issue and thought I would share them here, along with some helpful additional reading I have come across while researching the issue. I’d welcome comments on this outline as well as suggestions for additional reading. (Note: I’ve also embedded some useful images from Jeremiah Owyang of Crowd Companies.)

1) Just because policymakers claim that regulation is meant to protect consumers does not mean it actually does so.

  1. Cronyism/ Rent-seeking: Regulation is often “captured” by powerful and politically well-connected incumbents and used to their own benefit. (+ Lobbying activity creates deadweight losses for society.)
  2. Innovation-killing: Regulations become a formidable barrier to new innovation, entry, and entrepreneurism.
  3. Unintended consequences: Instead of resulting in lower prices & better service, the opposite often happens: higher prices & lower-quality service. (Example: painting all cabs the same color destroys branding & the ability to differentiate.)

2) The Internet and information technology alleviates the need for top-down regulation & actually does a better job of serving consumers.

  1. Ease of entry/innovation in online world means that new entrants can come in to provide better options and solve problems previously thought to be unsolvable in the absence of regulation.
  2. Informational empowerment: The Internet and information technology solves old problem of lack of consumer access to information about products and services. This gives them monitoring tools to find more and better choices. (i.e., it lowers both search costs & transaction costs). (“To the extent that consumer protection regulation is based on the claim that consumers lack adequate information, the case for government intervention is weakened by the Internet’s powerful and unprecedented ability to provide timely and pointed consumer information.” – John C. Moorhouse)
  3. Feedback mechanisms (product & service rating / review systems) create powerful reputational incentives for all parties involved in transactions to perform better.
  4. Self-regulating markets: The combination of these three factors results in a powerful check on market power or abusive behavior. The result is reasonably well-functioning and self-regulating markets. Bad actors get weeded out.
  5. Law should evolve: When circumstances change dramatically, regulation should as well. If traditional rationales for regulation evaporate, or new technology or competition alleviates need for it, then the law should adapt.

3) Sharing economy has demonstrably improved consumer welfare. It provides:

  1. more choices / competition
  2. more service innovation / differentiation
  3. better prices
  4. higher quality services  (safety & cleanliness /convenience / peace of mind)
  5. Better options & conditions for workers

4) If we need to “level the (regulatory) playing field,” best way to do so is by “deregulating down” to put everyone on equal footing; not by “regulating up” to achieve parity.

  1. Regulatory asymmetry is real: Incumbents are right that they are at disadvantage relative to new sharing economy start-ups.
  2. Don’t punish new innovations for it: But solution is not to just roll the old regulatory regime onto the new innovators.
  3. Parity through liberalization: Instead, policymakers should “deregulate down” to achieve regulatory parity. Loosen old rules on incumbents as new entrants challenge status quo.
  4. “Permissionless innovation” should trump “precautionary principle” regulation: Preemptive, precautionary regulation does not improve consumer welfare. Competition and choice do better. Thus, our default position toward the sharing economy should be “innovation allowed” or permissionless innovation.
  5. Alternative remedies exist: Accidents will always happen, of course. But insurance, contracts, product liability, and other legal remedies exist when things go wrong. The difference is that ex post remedies don’t discourage innovation and competition like ex ante regulation does. By trying to head off every hypothetical worst-case scenario, preemptive regulations actually discourage many best-case scenarios from ever coming about.

5) Bottom line = Good intentions only get you so far in this world.

  1. Just because a law was put on the books for noble purposes does not mean it really accomplished those goals, or still does so today.
  2. Markets, competition, and ongoing innovation typically solve problems better than law when we give them a chance to do so.

[P.S. On 9/30, my Mercatus Center colleague Matt Mitchell posted this excellent follow-up essay building on my outline and improving it greatly.]

[Image: Sharing Economy Taxonomy. Source: Jeremiah Owyang, Crowd Companies]

[Image: Why People Use Sharing Services. Source: Jeremiah Owyang, Crowd Companies]

Additional Reading

Sherwin Siy on digital copyright
https://techliberation.com/2013/08/13/sherwin-siy-on-digital-copyright/
Tue, 13 Aug 2013

Sherwin Siy, Vice President of Legal Affairs at Public Knowledge, discusses emerging issues in digital copyright policy. He addresses the Department of Commerce’s recent green paper on digital copyright, including the need to reform copyright laws in light of new technologies. This podcast also covers the DMCA, online streaming, piracy, cell phone unlocking, fair use recognition, digital ownership, and what we’ve learned about copyright policy from the SOPA debate.

Marc Hochstein on bitcoin
https://techliberation.com/2013/04/16/marc-hochstein/
Tue, 16 Apr 2013

Marc Hochstein, Executive Editor of American Banker,  a leading media outlet covering the banking and financial services community, discusses bitcoin.

According to Hochstein, bitcoin has made its name as a digital currency, but the truly revolutionary aspect of the technology is its dual function as a payment system competing against companies like PayPal and Western Union. While bitcoin has been in the news for its soaring exchange rate lately, Hochstein says the actual price of bitcoin is really only relevant for speculators in the short-term; in the long-term, however, the anonymous, decentralized nature of bitcoin has far-reaching implications.

Hochstein goes on to talk about  the new market in bitcoin futures and some of bitcoin’s weaknesses—including the volatility of the bitcoin market.

Joshua Gans on the economics of information
https://techliberation.com/2013/04/02/joshua-gans/
Tue, 02 Apr 2013

Joshua Gans, professor of Strategic Management at the University of Toronto’s Rotman School of Management and author of the new book Information Wants to be Shared, discusses modern media economics, including how books, movies, music, and news will be supported in the future.

Gans argues that sharing enhances most information’s value. He also explains that the business models of traditional media companies, gatekeepers who have relied on scarcity and control, have collapsed in the face of new technologies. Equally important, he argues that sharing can revive moribund, threatened industries even as he examines platforms that have, almost accidentally, thrived in this new environment.

Gabriella Coleman on the ethics of free software
https://techliberation.com/2013/01/08/gabriella-coleman-2/
Tue, 08 Jan 2013

Gabriella Coleman, the Wolfe Chair in Scientific and Technological Literacy in the Art History and Communication Studies Department at McGill University, discusses her new book, “Coding Freedom: The Ethics and Aesthetics of Hacking,” which has been released under a Creative Commons license.

Coleman, whose background is in anthropology, shares the results of her cultural survey of free and open source software (F/OSS) developers, the majority of whom, she found, shared similar backgrounds and world views. Among these similarities were an early introduction to technology and a passion for civil liberties, specifically free speech.

Coleman explains the ethics behind hackers’ devotion to F/OSS, the social codes that guide its production, and the political struggles through which hackers question the scope and direction of copyright and patent law. She also discusses the tension between the overtly political free software movement and the “politically agnostic” open source movement, as well as what the future of the hacker movement may look like.

Book Review: Christopher Yoo’s “The Dynamic Internet”
https://techliberation.com/2012/10/02/book-review-christopher-yoos-the-dynamic-internet/
Tue, 02 Oct 2012

Looking for a concise overview of how Internet architecture has evolved and a principled discussion of the public policies that should govern the Net going forward? Then look no further than Christopher Yoo‘s new book, The Dynamic Internet: How Technology, Users, and Businesses are Transforming the Network. It’s a quick read (just 140 pages) and is worth picking up.  Yoo is a Professor of Law, Communication, and Computer & Information Science at the University of Pennsylvania and also serves as the Director of the Center for Technology, Innovation & Competition there. For those who monitor ongoing developments in cyberlaw and digital economics, Yoo is a well-known and prolific intellectual who has established himself as one of the giants of this rapidly growing policy arena.

Yoo makes two straight-forward arguments in his new book. First, the Internet is changing. In Part 1 of the book, Yoo offers a layman-friendly overview of the changing dynamics of Internet architecture and engineering. He documents the evolving nature of Internet standards, traffic management and congestion policies, spam and security control efforts, and peering and pricing policies. He also discusses the rise of peer-to-peer applications, the growth of mobile broadband, the emergence of the app store economy, and what the explosion of online video consumption means for ongoing bandwidth management efforts. Those are the supply-side issues. Yoo also outlines the implications of changes in the demand-side of the equation, such as changing user demographics and rapidly evolving demands from consumers. He notes that these new demand-side realities of Internet usage are resulting in changes to network management and engineering, further reinforcing changes already underway on the supply-side.

Yoo’s second point in the book flows logically from the first: as the Internet continues to evolve in such a highly dynamic fashion, public policy must as well. Yoo is particularly worried about calls to lock in standards, protocols, and policies from what he regards as a bygone era of Internet engineering, architecture, and policy. “The dramatic shift in Internet usage suggests that its founding architectural principles from the mid-1990s may no longer be appropriate today,” he argues. (p. 4) “[T]he optimal network architecture is unlikely to be static. Instead, it is likely to be dynamic over time, changing with the shifts in end-user demands,” he says. (p. 7) Thus, “the static, one-size-fits-all approach that dominates the current debate misses the mark.” (p. 7)

Yoo makes a particularly powerful case for flexible network pricing policies. His outstanding chapter on “The Growing Complexity of Internet Pricing” offers an excellent overview of the changing dynamics of pricing in this arena and explains why experimentation with different pricing methods and business models must be allowed to continue. Getting pricing right is essential, Yoo notes, if we hope to ensure ongoing investment in new networks and services. He also notes how foolish it is to expect the government to come in and save the day through massive infrastructure investment to cover the hundreds of billions of dollars needed to continue to build out high-speed services:

Most industry and political observers believe that the federal government will not be in a position to allocate that amount of money to upgrade our nation’s broadband infrastructure for the foreseeable future. The next-generation network will thus be built by private enterprise. But private corporations cannot be expected to undertake such investments unless they have a reasonable prospect of recovering their upfront costs from consumers who are using the increased bandwidth and other enhancements to the existing network. (p. 102)

Again, that’s why flexible pricing policies and ongoing experimentation with various business models is vital. This insight is particularly timely in light of the recent renewed interest in data caps. A lot of people who don’t know a lick about economics and have never run a real business in their lives are seemingly obsessed with telling private operators how to run theirs. If the Net neutrality wars devolve into a battle over price controls — exactly as I predicted they would 7 years ago this month — then we could be headed for a day when federal policymakers derail the advances in broadband we’ve seen in recent years by substituting mandates for markets.

Throughout the second half of his book, Yoo explains why that would be a disaster for consumers and high-tech innovation. To most of us, the arguments Yoo advances here are perfectly logical, but to many Ivory Tower intellectuals who dominate Net policy debates today, it will all be considered apostasy of the very highest order. Those that elevate Net neutrality and so-called “public interest” regulation to quasi-religious concepts will likely be constructing Christopher Yoo voodoo dolls and attempting to sew his mouth shut. Yet, the policy standard Yoo is advancing here is perfectly logical. In essence, he’s trying to counter the gradual growth of a Precautionary Principle mindset for Internet policy. Here’s how he puts it:

Just as engineers must design structures that preserve room for experimentation, so must regulators. In particular, regulators should avoid promulgating policies that foreclose certain technical approaches or require industry actors to obtain advance approval before they can experiment with new technological solutions. The benefits of most practices will remain ambiguous before they are deployed, and placing the burden on industry actors to prove consumer benefit before implementation would chill experimentation and effectively prevent ambiguous practices from ever being deployed. This in turn would prevent engineers from obtaining the real-world experience they need to evaluate different technological solutions and eliminate the breathing room on which technological progress depends. In the face of uncertainty, policymakers should not attempt to predict which particular network solution will ultimately prevail; rather, they ought to focus on creating regulatory structures that give industry participants the freedom to pursue a wide range of business strategies and allow consumers to decide which one (or ones, if consumer demand is sufficiently diverse to support multiple business models targeted at different market niches) ultimately proves to be the best.” (p. 8)

In other words, public policy must not restrict experimentation based on conjectural fears and boogeyman scenarios. Public policy should generally seek to avoid ex ante forms of preemptive, prophylactic Internet regulation and instead rely on an ex post approach when and if things go wrong. As I have argued here many times before, as a general rule, our policymakers should embrace “techno-agnosticism” toward ongoing debates over standards, protocols, business models, pricing methods, and so on. Lawmakers should not be preemptively tilting the balance in one direction or the other or, worse yet, restricting experimentation that can help us find superior solutions. Here’s how Yoo articulates this same principle of techno-agnosticism:

network engineering is inherently an exercise in tradeoffs that does not lend itself to broad generalizations. There is no such thing as a perfect, inherently superior architecture. Instead, the optimal infrastructure for any particular network depends on the nature of the flows passing through the network as well as the costs of the technologies comprising the network. This perspective stands in stark contrast to the categorical tone that has dominated debates over Internet policy for the past five years. (p. 138)

Indeed it does. If you read through books by Zittrain, Lessig, Wu, van Schewick, Frischmann, and others, you will notice the consistent assertion that we already have the magic formula for the Internet and all networks, for that matter. It almost always comes down to what I have referred to as an ideology of “openness at any cost” or “neutrality uber alles.” In this religion, everything is subservient to openness and neutrality, no matter what the cost (and no matter how defined, even if that is much trickier than those academics let on). But for all the reasons Yoo lays out in his book, we should reject neutrality uber alles as the basis of public policy. “The shifts in the technological and economic environment surrounding the network should remind everyone involved in Internet policy of the importance of embracing change.” (p. 139).  Again, that counsels techno-agnosticism and light-touch, responsive regulation — not a preemptive Precautionary Principle for Internet decision-making. As Yoo states in his conclusion:

Perhaps the best means for creating such an environment is to create a regulatory-enforcement regime that evaluates any charges of improper behavior on a case-by-case basis after the fact… So long as the burden of proof is placed on the party challenging the practice, such a regime should provide sufficient breathing room for industry participants to experiment with new solutions for emerging problems while simultaneously safeguarding consumers against any anticompetitive practices. (p. 139).

And even under that regime, Yoo makes it clear throughout the book that there should be a very high bar established before regulation is pursued. This is particularly true because of the First Amendment values at stake when the government attempts to regulate speech platforms. In Chapter 9 of the book, Yoo walks the reader through all the relevant case law on this front and makes it clear how “the Supreme Court has repeatedly recognized that the editorial discretion exercised by intermediaries serves important free speech values.” (p. 120). Yoo also makes the case that a certain degree of intermediation helps serve consumer needs by helping them more easily find the content and services they desire. Law should not seek to constrain that and, under current Supreme Court First Amendment jurisprudence, it probably cannot.

So, in conclusion, I strongly encourage everyone to pick up a copy of Christopher Yoo’s  Dynamic Internet. It strikes just the right balance for Net governance and public policy in the information age. It all comes down to flexibility and freedom.  If the Internet and all modern digital technologies are to thrive, we must reject the central planner’s mindset that dominated the analog era and forever bury all the static thinking it entailed.

Additional Reading:

John Palfrey on interoperability
https://techliberation.com/2012/06/19/john-palfrey/
Tue, 19 Jun 2012

John Palfrey of the Berkman Center at Harvard Law School discusses his new book written with Urs Gasser, Interop: The Promise and Perils of Highly Interconnected Systems. Interoperability is a term used to describe the standardization and integration of technology. Palfrey discusses how the term can describe many relationships in the world and that it doesn’t have to be limited to technical systems. He also describes potential pitfalls of too much interoperability. Palfrey finds that greater levels of interoperability can lead to greater competition, collaboration, and the development of standards. It can also lead to less protection for privacy and security. The trick is to get to the right level of interoperability. If systems become too complex, then nobody can understand them and they can become unstable. Palfrey suggests the current financial crisis could be an example of this. Palfrey also describes the difficulty in finding the proper role of government in encouraging or discouraging interoperability.

What is “Optimal Interoperability”? A Review of Palfrey & Gasser’s “Interop”
https://techliberation.com/2012/06/11/what-is-%e2%80%9coptimal-interoperability%e2%80%9d-a-review-of-palfrey-gasser%e2%80%99s-%e2%80%9cinterop%e2%80%9d/
Mon, 11 Jun 2012

I’m pretty rough on all the Internet and info-tech policy books that I review. There are two reasons for that. First, the vast majority of tech policy books being written today should never have been books in the first place. Most of them would have worked just fine as long-form (magazine-length) essays. Too many authors stretch a promising thesis into a long-winded, highly repetitive narrative just to say they’ve written an entire book about a subject. Second, many info-tech policy books are poorly written or poorly argued. I’m not going to name names, but I am frequently unimpressed by the quality of many books being published today about digital technology and online policy issues.

The books of Harvard University cyberlaw scholars John Palfrey and Urs Gasser offer a welcome break from this mold. Their recent books, Born Digital: Understanding the First Generation of Digital Natives, and Interop: The Promise and Perils of Highly Interconnected Systems, are engaging and extremely well-written books that deserve to be books. There’s no wasted space or mindless filler. It’s all substantive and it’s all interesting. I encourage aspiring tech policy authors to examine their works for a model of how a book should be done.

In a 2008 review, I heaped praise on Born Digital and declared that this “fine early history of this generation serves as a starting point for any conversation about how to mentor the children of the Web.” I still recommend it highly to others today. I’m going to be a bit more critical of their new book, Interop, but I assure you that it is a text you absolutely must have on your shelf if you follow digital policy debates. It’s a supremely balanced treatment of a complicated and sometimes quite contentious set of information policy issues.

In the end, however, I am concerned about the open-ended nature of the standard that Palfrey and Gasser develop to determine when government should intervene to manage or mandate interoperability between or among information systems. I’ll push back against their amorphous theory of “optimal interoperability” and offer an alternative framework that suggests patience, humility, and openness to ongoing marketplace experimentation as the primary public policy virtues that lawmakers should instead embrace.

Interop is Important, but Often Difficult & Filled with Trade-Offs

Palfrey and Gasser begin by noting that “there is no single, agreed-upon definition of interoperability” and that “there are even many views about what interop is and how it should be achieved” (p. 5). They set out to change that by developing “a normative theory identifying what we want out of all this interconnectivity” that the information age has brought us (p. 3).

Generally speaking, Palfrey and Gasser believe increased interoperability — especially among information networks and systems — is a good thing because it “provides consumers greater choice and autonomy” (p. 57), “is generally good for competition and innovation” (p. 90), and “can lead to systemic efficiencies” (p. 129).

But they wisely acknowledge that there are trade-offs, too, noting that “this growing level of interconnectedness comes at an increasingly high price” (p. 2). Whether we are talking about privacy, security, consumer choice, the state of competition, or anything else, Palfrey and Gasser argue that “the problems of too much interconnectivity present enormous challenges both for organizations and for society at large” (p. 2). Their chapter on privacy and security offers many examples, but one need only look around at one’s own digital existence to realize the truth of this paradox. The more interconnected our information systems become, and the more intertwined our social and economic lives become with those systems, the greater the possibility of spam, viruses, data breaches, and various types of privacy or reputational problems. Interoperability giveth and it taketh away.

When Does “the Public Interest” Demand Interoperability Regulation?

So, how do we know when increased interoperability is good for us or society? How do we strike a reasonable balance? And, most controversially, when should government intervene to tip the balance in one direction or another?

Palfrey and Gasser return to these questions repeatedly throughout the book but admit that their answers will be dissatisfying since “there is no single form or optimal amount of interoperability that will suit every circumstance” (p. 76). Thus, “most of the specifics of how to bring interop about [must] be determined on a case-by-case basis” (p. 17). They elaborate:

That can feel unsatisfying. But it is an essential truth: the most interesting interop problems relate to society’s most complex and most fundamental systems. Their answers are never simple to come by, nor are they easy to implement. This characteristic of interop theory is a feature, not a bug. … The price to be paid for striving for a universal principle at the level of theory is that such a theory is full of nuances when it comes to application and practice (p. 17-18).

Fair enough. Yet, Palfrey and Gasser also make it clear they want government(s) to play an active role in ensuring optimal interoperability. They say they favor “blended approaches that draw upon the comparative advantages of the private and public sector” (p. 161), but they argue that government should feel free to tip or nudge interoperability determinations in superior directions. “If deployed with skill,” they argue, “the law can play a central role in ensuring that we get as close as possible to optimal levels of interoperability in complex systems” (p. 88).

That phrase — “optimal level of interoperability” — pops up repeatedly throughout the book. So, too, does the phrase “the public interest.” Palfrey and Gasser argue that governments must look out for “the public interest” and “optimal interoperability” since “market forces do not automatically lead to appropriate standards or to the adoption of the best available technology” (p. 167). Here they introduce two additional amorphous values that complicate the debate: “appropriate standards” and “best available technology.”

The fundamental problem with this “public interest” approach to interoperability regulation is that it is no better than the “I-know-it-when-I-see-it” standard we sometimes see at work in the realm of speech regulation. It’s an empty vessel, and if it is the lodestar by which policymakers make determinations about the optimal level of interoperability, then it leaves markets, innovators, and consumers subject to the arbitrary whims of what a handful of politicians or regulators think constitutes “optimal interoperability,” “appropriate standards,” and “best available technology.”

On the Limits of Knowledge

Palfrey and Gasser’s framework feels more than just “unsatisfying” in this regard; it feels downright insufficient. That’s because it is missing a major variable: the extent to which state actors are able to adequately define those terms or accurately forecast the future needs of markets or citizen-consumers.

Surprisingly, Palfrey and Gasser don’t really spend much time discussing the specific remedies the state might impose to achieve optimal interoperability. I would have liked to have seen them develop a matrix of interop options and then outline the strengths and weaknesses of each. But even absent a more detailed discussion of possible regulatory remedies, I would have settled for more concrete answers to the following questions: Why are we to assume that regulators possess the requisite knowledge needed to know when it makes sense to foreclose ongoing marketplace experimentation? And why should we trust that, by substituting their own will for that of countless other actors in the information technology marketplace, we will be left better off?

The closest Palfrey and Gasser get to defining a firm standard for when and why such state intervention is warranted comes on page 173 when they are discussing the need for the state to establish sound reasons for intervention. They argue:

The objective should not be interoperability per se but, rather, one or more public policy goal to which interoperability can lead. The goals that usually make sense are innovation and competition, but other objectives might include consumer choice, ease of use of a technology or system, diversity, and so forth (p. 173).

This is a bit better, but it still doesn’t fully grapple with the cost side of the cost-benefit calculus for intervention. Palfrey and Gasser are willing to at least acknowledge some of those problems when they remark that “the state is rarely in a position to call a winner among competing technologies” (p. 174). Moreover,

Lawmakers need to keep in view the limits of their own effectiveness when it comes to accomplishing optimal levels of interoperability. Case studies of government intervention, especially where complex information technologies are involved, show that states tend to be ill suited to determine on their own what specific technology will be the best option for the future (p. 175).

Quite right! Yet, that insight does not seem to influence their calls elsewhere in the book for regulatory activism. That’s a shame since the admonition about policymakers recognizing the “limits of their own effectiveness” should be able to help us devise some limiting principles regarding the state’s role.

Toward an Alternative Theory: Experimental, Evolutionary Interoperability

Allow me to offer a different theory of optimal interoperability that flows from these previous insights. It’s based on a more dynamic view of markets and the central importance of experimentation in the face of uncertainty. Let me just go ahead and articulate the core principles of what I will refer to as “experimental, evolutionary interoperability theory.” Then I’ll explain it in more detail.

  • Experimental, evolutionary interoperability: The theory that ongoing marketplace experimentation with technical standards, modes of information production and dissemination, and interoperable information systems is almost always preferable to the artificial foreclosure of this dynamic process through state action. The former allows for better learning and coping mechanisms to develop while also incentivizing the spontaneous, natural evolution of the market and market responses. The latter (regulatory foreclosure of experimentation) limits that potential.

Palfrey and Gasser would label this a “laissez-faire” theory of interoperability and oppose it since they believe “a pure laissez-faire approach to interop rarely works out well” (p. 160). But they are wrong, at least to the extent they include the sweeping modifier “rarely” to describe this model’s effectiveness. In reality, the vast majority of interoperability that occurs in today’s information economy happens in a completely natural, evolutionary fashion without any significant state intervention whatsoever. In countless small and big ways alike, interconnection and interoperability happen every day throughout society. Yes, it is true that interoperability often happens against the backdrop of a legal system that allows court action to enforce certain rights or address perceived harms, but I would not classify that as a significant direct state intervention to tip or nudge interconnection decisions in one direction or another. And when interoperability doesn’t happen naturally, there are often good reasons it doesn’t and, even if there aren’t, non-interop spawns beneficial marketplace reactions and innovations.

Experimental, evolutionary interoperability theory flows out of Schumpeterian competition theory and the related field of evolutionary economics, but it is also heavily influenced by public choice theory (which stresses the limitations of romanticized theories of politics, planning, and “public interest” regulation). This alternative theory begins by accepting the simple fact that, as Austrian economist F.A. Hayek taught us, “progress by its very nature cannot be planned.” The wiser man, Hayek noted, “is very much aware that we do not know all the answers and that he is not sure that the answers he has are certainly the right ones or even that we can find all the answers.”

Ongoing experimentation with varying business models and modalities of social and economic production allows us to see what consumer choice and trial and error experimentation yields naturally over time. Ongoing experiments with flexible, voluntary interop standards and negotiations also allow us to determine which technological standards seem to benefit consumers in the short-term while also encouraging innovators to leap-frog existing standards and platforms when they become locked-in for too long or seem sub-optimal.

In the short-term, it is entirely possible that such voluntary, evolutionary interop experiments “fail” in various ways. That is often a good thing. Failures are how individuals and a society learn to cope with change and devise systems and solutions to accommodate technological change. As Samuel Beckett once counseled: “Ever tried. Ever failed. No matter. Try again. Fail again. Fail better.” Progress depends upon an embrace of this uncertainty and acceptance of a world of constant upheaval if we are to learn how to cope, adapt, and move forward.

In this model, technological innovation often springs from the quest for the prize of market power.  Palfrey and Gasser generally reject this Schumpeterian vision of dynamic competition, but they at least do a nice job of describing it:

firms may have a stronger incentive to be innovative when low levels of interoperability promise higher or even monopoly profits. This sort of competition… creates incentives for firms to come up with entirely new generations of technologies or business methods that are proprietary (p. 121).

They reject this approach based on (1) the mistaken notion that the quest for the prize of market power ends in the attainment and preservation of that market power; and (2) the belief that policymakers possess the ability to set us on a better course through wise interventions.

In a moment, I’ll prove why that is misguided by examining a few real-world case studies. For now, however, let’s return to Palfrey & Gasser’s central operating principle and contrast it with the vision I’ve articulated here. Recall that they argue “it is important to maintain and facilitate diversity in the marketplace. We simply want systems to work together when we want them to and to not work together when we do not.” Again, there is no standard here if one is suggesting this as the principle by which to determine when state intervention is desirable. But if one is looking at that aspirational statement as a description of the natural order of things — namely, that we do indeed “want systems to work together when we want them to and to not work together when we do not” — then that is a perfectly sound principle for understanding why state intervention should be disfavored in all but the most extreme circumstances. To reiterate: We should not allow the state to foreclose interoperability experiments because (a) those experiments have value in and of themselves, and (b) state action is likely to have myriad unintended consequences and unforeseen costs that are not easily remedied or reversed.

There are moments in the book when Palfrey and Gasser appear somewhat sympathetic to the sort of alternative “evolutionary interop” theory I have articulated here. For example, they note that:

The web is a great equalizer for technology firms. As consumers, we have come to expect that everything will work together without incident or interruption. We think it bizarre when something in the digitally networked world does not mesh with something else, perceiving whatever it is to be broken, in need of repair. This high degree of expectation is a powerful driver of interoperability. Market players are increasingly responding to this consumer demand and making these invisible links work for their customers without any government intervention (p. 28) [italics added]

You won’t be surprised to hear that I agree wholeheartedly! Moreover, what it really proves is that ongoing marketplace experimentation and the evolution of norms and standards generally solve interoperability problems as they develop. That doesn’t mean markets are perfectly competitive or always produce perfect interoperability. But, again, why should we believe state intervention will do a better job? And isn’t it possible that intervention could undermine the very dynamics Palfrey and Gasser describe, in which consumers and market actors interact to make those “invisible links” work out as nicely as they do today?

Interop, Competition & Innovation: Some Case Studies of Evolutionary Interoperability in Action

To better explain experimental, evolutionary interop theory and how it plays out in the real-world, let’s examine the complex relationship between interoperability, competition, and innovation in the information economy through the prism of three case studies: AOL and instant messaging, video game consoles, and smartphones.

AOL

The America Online (AOL) case is probably the most profound example of Schumpeterian creative destruction rapidly eroding the market power of a once “dominant” digital giant. Not long ago, AOL was cast as the great villain of online openness and interoperability. In fact, when Lawrence Lessig penned his acclaimed book Code in the late 1990s, AOL was supposedly set to become the corporate enslaver of cyberspace.

For a time, it was easy to see why Lessig and others were worried. Twenty-five million subscribers were willing to pay $20 per month to get a guided tour of AOL’s walled garden version of the Internet. Then AOL and media titan Time Warner announced a historic mega-merger that had some predicting the rise of “new totalitarianisms” and corporate “Big Brother.”

Fearing the worst, the Federal Trade Commission and the Federal Communications Commission placed several conditions on their approval of the merger. These included “open access” provisions that forced Time Warner to offer the competing ISP service from the second-largest ISP at that time (Earthlink) before it made AOL’s service available across its largest cable divisions. Another provision imposed by the FCC mandated interoperability of instant messaging systems based on the fear that AOL was poised to monopolize that emerging technology.

Palfrey and Gasser suggest this was a necessary and effective intervention. “The AOL IM case is another instance in which the role of government was key in establishing a more interoperable ecosystem,” they write, and they credit the FCC’s action with cutting AOL’s share of the IM market (p. 68-9). That’s a huge stretch. The reality is that markets and technologies evolved around AOL’s walled garden and decimated whatever advantage the firm had in either the web portal business or the instant messaging market.

First, despite all the hand-wringing and regulatory worry, AOL’s merger with Time Warner quickly went off the rails and AOL’s online “dominance” quickly evaporated. Looking back at the deal with TW, Fortune magazine senior editor Allan Sloan called it the “turkey of the decade” since it cost shareholders hundreds of billions. Second, AOL’s attempt to construct the largest walled garden ever also failed miserably as organic search and social networking flourished. Consumers showed they demanded more than the hand-held tour of cyberspace.

Finally, the hysteria about AOL’s threat to monopolize instant messaging and deny interoperability proved particularly unwarranted and also serves as a cautionary tale for those who argue regulation is needed to solve interoperability problems. At the time, well-heeled major competitors like Yahoo and Microsoft already had significant competing IM platforms, and others were rapidly developing. Interoperability among those systems was also spontaneously developing as consumers demanded greater flexibility among and within their communications systems. The development of Trillian, which allowed IM users to see all their various IM feeds at once, was an early precursor of what was to come. Today, anyone can download a free chat client like Digsby or Adium to manage multiple IM and email services from Yahoo!, Google, Facebook and just about anyone else, all within a single interface, essentially making it irrelevant which chat service friends use.

In a truly Schumpeterian sense, innovators came in and disrupted AOL’s plans to dominate instant messaging with innovative offerings that few critics or regulators would have believed possible just a decade ago. Progress happened, and nobody planned it from above. The FCC’s IM interoperability provision was quietly sunset less than three years after its inception since the evolution of technology and markets had rapidly eliminated the perceived problem. That mandate, as it turned out, wasn’t needed at all, and all it probably accomplished during its short life span was to hobble AOL’s ability to find a way to remain relevant in the increasingly competitive Web 2.0 world.

Video game consoles

At first blush, the video game console wars might seem like the ideal case study for those who favor greater interoperability regulation. After all, in a static sense, why do we really need several competing video game platforms that prevent consumers from playing their games on more than one system? The lack of console interoperability also drives up development costs for game makers. Many of those developers would prefer to just code games for a single, universal gaming platform. Therefore, isn’t this the perfect excuse for state intervention to ensure “optimal interoperability”?

To the contrary, this is another example of why government should generally avoid intervening to try to achieve some sort of artificial optimal interoperability. This market has undergone continuous, turbulent change and witnessed remarkable pro-consumer innovation despite a lack of interoperability.

The video game console wars have raged since the late 1970s. The first generation of consoles was dominated by Atari (2600), Mattel (Intellivision), and Coleco (ColecoVision). By the mid-1980s, the industry saw a new cast of characters displace the old players. Nintendo (NES) and Sega (Genesis) took the lead. Atari attempted a rebirth with its “Jaguar” console but failed miserably.

The demise of Atari’s 2600 console was particularly notable. When it debuted in 1977, the system revolutionized the home game market on its way to selling more than 30 million units. For a few years, it utterly dominated the console market and the company “rushed out games, assuming that its customers would play whatever it released,” note New York Times reporters Sam Grobart and Ian Austen. But demand rapidly dried up as other consoles and personal computers took the lead with more powerful, flexible platforms and games. In the end, “millions of unsold games and consoles were buried in a New Mexico landfill in 1983. Warner Communications, which bought Atari in 1976 for $28 million, sold it in 1984 for no cash.”

The next generation of machines was dominated by Nintendo and Sega. But by the turn of the century, more new faces appeared and disrupted the second generation of market leaders. Sony (PlayStation) and Microsoft (Xbox) introduced powerful new consoles that continue to evolve to this day. Both consoles have already cycled through three iterations, each increasingly powerful and more functional. Sega dropped out of the console business and refocused on game development. Nintendo managed to survive with its innovative “Wii” system, but has fallen from its perch as king of the console market. Many also forget Apple’s failed run at the console business with its “Pippin” system in the late 1990s. Steve Jobs killed off the console when he returned to once again lead Apple in 1997. Ironically, just a decade later, with the rise of the iPhone and the Apple App Store, the company would emerge as a major player in the gaming market as smartphone gaming exploded.

Of course, PC gaming existed across these generations and handheld gaming devices and now smartphones are also providing competition to traditional consoles. Arcade games also existed both then and now. Thus, the video game market has always been broader than just home gaming consoles.

Nonetheless, at no time during the turbulent history of this sector have major consoles interoperated. The result has been a constant effort by major console developers to leap-frog the competition with increasingly innovative and powerful consoles and peripherals. Would Microsoft have developed the Kinect motion-sensing device if Nintendo had not previously developed its game-changing Wii motion controllers? It’s impossible to know, but it would seem that non-interoperability had something to do with that beneficial development. Microsoft needed a game-changing peripheral of its own to meet the Nintendo challenge since Nintendo was not about to share its innovations with the competition. Meanwhile, Sony has developed its own motion-based “Move” system to compete with Microsoft and Nintendo.

This is a highly dynamic marketplace at work. Could policymakers have predicted that three major non-interoperable home consoles would produce so much innovation? Would they have judged that to be too much or too little competition? Would they have been able to foresee or help bring about the disruptive competition from portable gaming devices or smartphones? What sort of interop regulation would have made that happen?

As Palfrey and Gasser suggest in their book, there really “is no single form or optimal amount of interoperability that will suit every circumstance.” The video game case study seems to prove that. Yet, their framework leaves the door open a bit wider for state meddling to determine “optimal interop.” I have little faith that state planners could have given us a more innovative video game marketplace through interop nudging. And I also worry that if the door had been open for regulators at the FCC or elsewhere to influence interoperability decisions, it might also have opened the door to content regulation since many lawmakers have long had an appetite for video game censorship.

Smartphones

The mobile phone handset and operating system marketplace has undergone continuous change over the past 15 years and is still evolving rapidly. There are some interoperable elements, such as the ability to connect calls and send texts and IMs. But other parts of the smartphone ecosystem are not interoperable, such as underlying operating systems or apps and app stores.

In the midst of this mixed system of interoperable and non-interoperable elements, innovation and cut-throat competition have flourished.

When cellular telephone service first started taking off in the mid-1990s, handsets and mobile operating systems were essentially one and the same, and Nokia and Motorola dominated the sector with fairly rudimentary devices. The era of personal digital assistants (PDAs) dawned during this period, but mostly saw a series of overhyped devices, including Apple’s “Newton,” that failed to catch on. In the early 2000s, however, a host of new players and devices entered the market, many of which are still on the scene today, including LG, Sony, Samsung, Siemens, and HTC. Importantly, the sector began splitting into handsets versus operating systems (OS). Leading mobile OS makers have included Microsoft, Palm, Symbian, BlackBerry (RIM), Apple, and Android (Google).

The sector continues to undergo rapid change and interoperability norms have evolved at the same time. Looking back, it’s hard to know whether increased interoperability would have helped or hurt the state of competition and innovation.

Consider Palm, BlackBerry, and Microsoft, which all limited interoperability with other systems in various ways. Palm smartphones were wildly popular for a brief time and brought many innovations to the marketplace, for example. Palm underwent many ownership and management changes, however, and rapidly faded from the scene. After buying Palm in 2010, HP announced it would use its webOS platform in a variety of new products. That effort failed, however, and HP instead announced it would transition webOS to an open source software development model.

Similarly, RIM’s BlackBerry was thought to be the dominant smartphone device for a time, but it has recently been decimated. BlackBerry’s rollercoaster ride has left it “trying to avoid the hall of fallen giants” in the words of an early 2012 New York Times headline.  The company once commanded more than half of the American smartphone market but now has under 10 percent, and that number continues to fall.

Microsoft also had a huge lead in licensing its Windows Mobile OS to high-end smartphone handset makers until Apple and Android disrupted its business. It’s hard to believe now, but just a few years ago the idea of Apple or Google being serious contenders in the smartphone business was greeted with suspicion, even scorn, by popular handset makers such as Nokia and Motorola. This serves as another classic example of those with a static snapshot mentality disregarding the potential for new entry and technological disruption. Just a few years later, Nokia’s profits and market share have plummeted, and a struggling Motorola was purchased by Google. Meanwhile, again, Palm seems dead, BlackBerry is dying, and Microsoft is struggling to win back market share it has lost to Apple and Google in this arena.

It would seem logical to conclude that the ebbs and flows of interoperable and non-interoperable elements of the smartphone world have created a turbulent but vibrantly innovative sector. Has the lack of interoperable operating systems or apps and apps stores hurt smartphone consumers? It’s hard to see how. Mandating interoperability at either level could lead to an OS or app store monopoly, most likely for Apple if such a policy were pursued today.

While Apple has had great success and earned endless kudos from consumers and tech wonks alike for its slick, user-friendly innovations, some critics decry its proprietary business model and more “controlled” user experience. Apple tightly controls almost every level of production of its iPhone smartphone and iPad tablet. Interoperability with competing systems, standards, or technologies is limited in many ways. Is that bad? Some critics think so, suggesting that greater “openness” — presumably in the form of greater device or program interoperability — is needed. But so what? Consumers seem extremely happy with Apple devices. Moreover, well-heeled rivals like Google (Android) and Microsoft continue to innovate at a healthy clip and offer consumers a decidedly different user experience. As with video game consoles, non-interop has had some important dynamic effects and advantages for consumers. It’s hard to know what “optimal interoperability” would even look like in the modern smartphone marketplace and how it would be achieved, but it’s equally hard to believe that consumers would be significantly better off if regulators were trying to achieve it through top-down mandates on such a dynamic, fast-moving market. [For more on this topic, see my 2011 book chapter, “The Case for Internet Optimism, Part 2 – Saving the Net From Its Supporters,” from the book, The Next Digital Decade.]

Case Study Summary & Analysis

These case studies suggest that defining “optimal interoperability” is a pipe dream. In some cases, consumers demanded a certain amount of interoperability, and they got it. But it seems equally obvious that they did not demand perfect interoperability in every case. Few consumers are tripping over their own feet in a mad rush to toss out their Xboxes or iPhones just because they are not perfectly interoperable. On the other hand, since the days of the old “walled garden” hell of AOL, CompuServe, Prodigy, and so on, it would seem that information technology markets are growing more “open” in other ways. You can’t completely lock down a user’s online experience and expect to win their business over the long haul.

Palfrey and Gasser make that point quite nicely in the book:

Increasingly, though, businesses are seeing the merits of strategies based on openness. A growing number of businesses are pursuing models that incorporate interoperability as a core principle. More and more firms, especially in the information business, are shedding their proprietary approaches in favor of interoperability at multiple levels. The goal is not to be charitable to competitors or customers, of course, but to maximize returns over time by building an ecosystem with others that holds greater promise than the go-it-alone approach (p. 149).

Quite right, but let’s not pretend that any mass market information platforms or systems will ever be perfectly “open” or interoperable. There will always be some limitations on how such systems are used or shared. And that’s just fine once you embrace a more flexible theory of evolutionary interoperability.  Ongoing experiments will get us to a better place.

Conclusion: Let Interop Experiments Continue!

So, let me wrap up by restating my alternative theory of optimal interoperability as succinctly as possible: When in doubt, ongoing, bottom-up, dynamic experimentation will almost always yield better answers than arbitrary intervention and top-down planning. Again, that is not to say that all interoperability experiments will leave society better off in the short-term. Some interoperability experiments and resulting market norms or outcomes can create challenging dilemmas for individuals and institutions. There may be short-term spells of “market power,” for example, and some standards may get locked in longer than some of us think makes sense. If, however, we have faith in humans to solve problems with information and technology, then still more experimentation — not state intervention — is the answer. And that is especially true once you accept the fact that those seeking to intervene have very limited knowledge of all the relevant facts needed to even make wise decisions about the future course of technology markets or information systems.

Some will find my alternative theory of optimal interoperability no more satisfying than Palfrey and Gasser’s since they may find the experimental interop framework too inflexible when it comes to state action. Whereas the frustration with Palfrey and Gasser’s theory will likely flow from their failure to define a coherent standard for when intervention is warranted, my approach solves that problem by suggesting we should largely abandon the endeavor and instead let ongoing market experiments solve interop problems over time. For me, we would need to find ourselves in a veritable whole-world-is-about-to-go-to-hell sort of moment before I could go along with state intervention to tip the interop scales in one direction or another. And, generally speaking, this is exactly the sort of thing that antitrust laws are supposed to address after a clear showing of harm to consumer welfare. Stated differently, to the extent any state intervention to address interoperability can be justified, ex post antitrust remedies should almost always trump ex ante regulatory meddling.

This alternative vision of evolutionary, experimental interoperability will be rejected by those who believe the state has the ability to wisely intervene and nudge markets to achieve “optimal interoperability” through some sort of Goldilocks principle that can supposedly get it just right. For those of us who have doubts about the likelihood of such sagacious state action — especially for fast-paced information sectors — the benefits of ongoing marketplace experimentation far outweigh the costs of letting those experiments run their course.

Regardless, we should be thankful that John Palfrey and Urs Gasser have provided us with a book that so perfectly frames what should be a very interesting ongoing debate over these issues. I encourage everyone to pick up a copy of Interop so you can join us in this important discussion.



]]>
https://techliberation.com/2012/06/11/what-is-%e2%80%9coptimal-interoperability%e2%80%9d-a-review-of-palfrey-gasser%e2%80%99s-%e2%80%9cinterop%e2%80%9d/feed/ 4 41384
Does the EFF favor government regulation of computer manufacturers? https://techliberation.com/2012/05/31/does-the-eff-favor-government-regulation-of-computer-manufacturers/ https://techliberation.com/2012/05/31/does-the-eff-favor-government-regulation-of-computer-manufacturers/#respond Thu, 31 May 2012 20:04:56 +0000 http://techliberation.com/?p=41329

You won’t find the words ‘government’ or ‘regulation’ in this post at EFF’s blog by Micah Lee and Peter Eckersley. They’re just appealing to Apple’s better angels to drop its closed ways. I’ve explained before why that’s a rational thing to do. But will the EFF assure supporters like me that it will never endorse government enforcement of a “bill of rights” like the one Lee and Eckersley propose today?

What I like about EFF is that it is a pro-liberty group, but I hope I’m not wrong in assuming that they view liberty as I do: as a negative concept. They never come out and say it, but it sure sounds like the authors believe that if Apple doesn’t come around to seeing the virtues of openness and provide an escape hatch, then maybe they should be forced to. I get that impression from passages like this:

When technology and phone companies defend the restrictions that they are imposing on their customers, the most frequent defense they offer is that it’s actually in their customers’ interest to be deprived of liberty: “If we let people do what they want with their pocket computers, they will do stupid things with them. You will be safer and happier in our walled compound than you would be outside.”

Imposing on their customers? Seems to me like the vast majority of Apple’s customers are choosing these restrictions. It’s not Apple that thinks its customers are stupid, and is therefore “imposing” a locked phone on them, it’s Lee and Eckersley who seem to have a low regard for customers’ preferences and want to impose an open device on them.

We can of course debate whether customers are being short-sighted in the choice they’re making, whether the benefits of closed platforms outweigh the costs, and whether we have the best of both worlds right now, but you can’t say that customers are being “deprived of their liberty.” What liberty are they being deprived of? Does the EFF believe there is a positive right to mobile computers that run arbitrary code?

I repeat my plea: Can EFF assure us that it will not support government regulation of computer manufacturers?

]]>
https://techliberation.com/2012/05/31/does-the-eff-favor-government-regulation-of-computer-manufacturers/feed/ 0 41329
Book Review: Infrastructure: The Social Value of Shared Resources, by Brett Frischmann https://techliberation.com/2012/04/25/book-review-infrastructure-the-social-value-of-shared-resources-by-brett-frischmann/ https://techliberation.com/2012/04/25/book-review-infrastructure-the-social-value-of-shared-resources-by-brett-frischmann/#comments Wed, 25 Apr 2012 18:01:00 +0000 http://techliberation.com/?p=40998

The folks at the Concurring Opinions blog were kind enough to invite me to participate in a 2-day symposium they are holding about Brett Frischmann’s new book, Infrastructure: The Social Value of Shared Resources. In my review, I noted that it’s an important book that offers a comprehensive and highly accessible survey of the key issues and concepts, and outlines much of the relevant literature in the field of infrastructure policy. Frischmann’s book deserves a spot on your shelf whether you are just beginning your investigation of these issues or have covered them your entire life. Importantly, readers of this blog will also be interested in the separate chapters Frischmann devotes to communications policy and Net neutrality regulation, as well as his chapter on intellectual property issues.

However, my review focused on a different matter: the book’s almost complete absence of “public choice” insights and Frischmann’s general disregard for thorny “supply-side” questions.  Frischmann is so focused on making the “demand-side” case for better appreciating how open infrastructures “generate spillovers that benefit society as a whole” and facilitate various “downstream productive activities,” that he short-changes the supply-side considerations regarding how infrastructure gets funded and managed. I argue that:

When one begins to ponder infrastructure management problems through the prism of public choice theory, the resulting failures we witness become far less surprising. The sheer scale of many infrastructure projects opens the door to logrolling, rent-seeking, bureaucratic mismanagement, and even outright graft. Regulatory capture is an omnipresent threat, too. . .  any system big enough and important to be captured by special interests and affected parties often will be. Frischmann acknowledges the problem of capture in just a single footnote in the book and admits that “there are many ways in which government failures can be substantial.” (p. 165) But he asks the reader to quickly dispense with any worries about government failure since he believes “the claims rest on ideological and perhaps cultural beliefs rather than proven theory or empirical fact.” (p. 165) To the contrary, decades of public choice scholarship has empirically documented the reality of government failure and its costs to society, as well as the plain old-fashioned inefficiency often associated with large-scale government programs. For infrastructure projects in particular, the combination of these public choice factors usually adds up to massive inefficiencies and cost overruns.

From there I launch into a fuller discussion of public choice insights and outline why it is essential that such considerations inform debates about infrastructure policy going forward. Again, read my entire review here.

]]>
https://techliberation.com/2012/04/25/book-review-infrastructure-the-social-value-of-shared-resources-by-brett-frischmann/feed/ 8 40998
The consequences of Apple’s walled garden https://techliberation.com/2011/11/14/the-consequences-of-apples-walled-garden/ https://techliberation.com/2011/11/14/the-consequences-of-apples-walled-garden/#comments Mon, 14 Nov 2011 16:13:06 +0000 http://techliberation.com/?p=39062

Over at TIME.com, I write about last week’s flap over Apple kicking famed security researcher Charlie Miller out of its iOS developer program:

So let’s be clear: Apple did not ban Miller for exposing a security flaw, as many have suggested. He was kicked out for violating his agreement with Apple to respect the rules around the App Store walled garden. And that gets to the heart of what’s really at stake here–the fact that so many dislike the strict control Apple exercises over its platform. …

What we have to remember is that as strict as Apple may be, its approach is not just “not bad” for consumers, it’s creating more choice.

Read the whole thing here.

]]>
https://techliberation.com/2011/11/14/the-consequences-of-apples-walled-garden/feed/ 3 39062
Apple’s iCloud Strategy: Lock-In or Consumer Convenience? https://techliberation.com/2011/06/08/apples-icloud-strategy-lock-in-or-consumer-convenience/ https://techliberation.com/2011/06/08/apples-icloud-strategy-lock-in-or-consumer-convenience/#comments Wed, 08 Jun 2011 20:00:13 +0000 http://techliberation.com/?p=37235

Wired’s Brian Chen writes today about the “damage” caused to Apple’s competitors and its own developers by products announced at yesterday’s WWDC keynote, making several claims that are a bit dubious, the most suspect of which was this claim about Apple’s new cloud-focused trio:

Now, here’s why iCloud, iOS 5 and Lion pack such a deadly punch against so many companies: Together, they strengthen Apple’s lock-in strategy with vertical integration.

While I don’t doubt that Apple is indeed going to deal a very deadly punch to many competitors with its version of cloud computing for consumers, I think using the term “lock-in” is going too far. True lock-in would mean driving consumers down a one-way street where their data can’t be moved to another platform (think Facebook prior to late last year) or driving up switching costs through cancellation fees à la the telecom industry. Apple, on the other hand, is offering consumers a truly compelling user experience, not holding them hostage.

For example, files created in Pages—one of the iCloud-enabled apps—can be exported and uploaded to Google Docs. Similarly, your iTunes downloads can be added to your Amazon or Google music locker.  Meanwhile Apple’s mail and calendar offerings use open standards and that data can also be easily moved to other platforms.

Yet Chen isn’t alone in this lock-in theory, The Wall Street Journal’s Rolfe Winkler has advanced the same thinking in a piece published yesterday:

What makes Apple’s latest product compelling isn’t unique technology; both Amazon.com and Google have Internet-based storage offerings. Rather, it is that Apple is doing more to lock in customers. According to IDC analyst Danielle Levitas, as they surrender more digital property to Apple servers, users become more likely to buy future generations of Apple products. Moving it all is complicated.

Certainly it’s true that moving large amounts of data from one service to another won’t be pretty, but is this a consequence of the specific choices made by Apple, or is this just part of the brave new world of cloud computing?

Aside from a few uber geeks building self-hosted cloud repositories for their own use, the world of all-my-data-everywhere-at-all-times is going to be facilitated by companies offering increasingly sophisticated and deeply integrated ways of making use of your digital detritus.  Users are bound to become accustomed to one set of visual vocabularies, functionalities, and workflows and some may find it hard to move to a competitor as a result.  But unless authorities ban product integration and force consumers to construct their own data syncing solutions piece-by-piece, these rather minor switching costs seem inevitable.

Open standards and low-commitment software-as-a-service offerings make the web an appealing place for geeks who always want the coolest, most advanced bit of software.  But, to the average consumer, getting all of those web-based widgets to work together is a daunting and often perplexing task.  If Apple can solve that consumer pain, that’s a win, not a loss—even if it falls short of the geek ideal.

]]>
https://techliberation.com/2011/06/08/apples-icloud-strategy-lock-in-or-consumer-convenience/feed/ 2 37235
Is Digital Utopianism Dead? And Other Questions https://techliberation.com/2011/03/10/is-digital-utopianism-dead-and-other-questions/ https://techliberation.com/2011/03/10/is-digital-utopianism-dead-and-other-questions/#respond Thu, 10 Mar 2011 07:04:23 +0000 http://techliberation.com/?p=35514

What I hoped would be a short blog post to accompany the video from Geoff Manne and my appearances this week on PBS’s “Ideas in Action with Jim Glassman” turned out to be a very long article which I’ve published over at Forbes.com.

I apologize to Geoff for taking an innocent comment he made on the broadcast completely out of context, and to everyone else who chooses to read 2,000 words I’ve written in response.

So all I’ll say here is that Geoff Manne and I taped the program in January, as part of the launch of TechFreedom and of “The Next Digital Decade.”   Enjoy!

 

 

]]>
https://techliberation.com/2011/03/10/is-digital-utopianism-dead-and-other-questions/feed/ 0 35514
How closed is Apple anyway? https://techliberation.com/2010/11/04/how-closed-is-apple-anyway/ https://techliberation.com/2010/11/04/how-closed-is-apple-anyway/#comments Thu, 04 Nov 2010 17:15:12 +0000 http://techliberation.com/?p=32835

Anyone who knows me will attest to my status as an Apple fanboy. (I type this on my new 11″ MacBook Air, which I managed to resist purchasing for a full week after it was announced.) Hopefully they’ll also attest to my ability to put consumer preference aside when considering logical arguments because today I want to suggest to you that Apple’s business strategy is good for the open internet.

Apple has come under fire by some supporters of an open internet and open software platforms such as Jonathan Zittrain and Tim Wu, who argue that Apple’s walled garden approach to devices and software will lead us to a more controlled and less innovative world. In particular, they point to the app store and Apple’s zealous control over what apps consumers are allowed to purchase and run on their devices. Here’s the thing, though: Every Apple device comes with a web browser. A web browser is an escape hatch from Apple’s walled garden. And Apple has taken a backseat to no one in nurturing an open web. Consider this:

  • Apple created and open-sourced Webkit, arguably the most modern and standards-compliant web rendering engine now available. It serves as the basis for the Safari and Google Chrome browsers on desktops and the iPhone, Android, WebOS, and Blackberry browsers on mobile devices. Why is that important? Because its strict adherence to HTML5 and related standards has allowed developers to make cross-platform applications (like Google Docs and Gmail) without worrying about proprietary extensions like those of Microsoft and Adobe. In fact, Webkit’s success is in large part responsible for Internet Explorer’s decline and pressure on Microsoft to become more standards compliant.

  • Apple’s war on Flash has often been portrayed as evidence of Apple’s domineering attitude, but in fact it can be seen as a victory for the open web. Flash, after all, is a closed proprietary technology. Apple’s refusal to include Flash in its mobile devices (and now Macs) not only makes for better devices since Flash is crashy, a CPU and battery hog, and a perennial security risk, but has also incentivized developers to move to HTML5, CSS, and JavaScript for their web applications. In fact, Adobe has been promoting tools that help convert their Flash applications to HTML5. Microsoft has similarly been backing away from its Flash competitor Silverlight in favor of open standards.

Will Apple ever see the open web as a threat to its walled garden? I’m not sure why they would. You’re still going to need a device to take advantage of web apps, and Apple is in the business of selling devices. What Apple does care about is making sure the web runs on open standards, so that they can’t be locked out and so that the web experience is no better on any other platform. If they can make sure that’s the case, then they can compete on another margin, namely what they’re good at: excellent devices and their vertical, integrated, curated software and media ecosystem.

Now, that strategy didn’t work for AOL. If you could get the web anywhere, why would you pay extra for curated Time-Warner content? I think there are differences. The web was an afterthought for AOL and it showed, and what AOL was offering for a premium was not very different from what was available for free on the web. But whether it works out for Apple or not, its closed business model is not only perfectly compatible with an open and “generative” web, but it’s in Apple’s interest to foster it, and we’ve seen them do just that.

]]>
https://techliberation.com/2010/11/04/how-closed-is-apple-anyway/feed/ 15 32835
Boycotting Apple is not irrational https://techliberation.com/2010/11/04/boycotting-apple-is-not-irrational/ https://techliberation.com/2010/11/04/boycotting-apple-is-not-irrational/#comments Thu, 04 Nov 2010 15:55:30 +0000 http://techliberation.com/?p=32829

Last week’s episode of Econtalk featured Russ Roberts talking to Tom Hazlett about Apple vs. Google and open vs. closed business models. Tim Lee has already addressed some concerns about Russ and Tom’s treatment of the topic, which I won’t rehash here. But I did want to comment on this statement by Russ (at minute 33):

The idea that you shouldn’t buy Apple stuff, which I’ve actually seen people say, because it’s somehow immoral because [Steve Jobs] is so controlling, is a bizarre idea. I’m not quite sure where it comes from. It comes from some of the freedom of the internet and the stuff we’ve become accustomed to.

Russ then likens a personal conviction to avoid closed products to some of his readers’ feelings of entitlement that they have a right to post a comment on his blog, and to a stranger thinking he has the right to take hot dogs from Russ’s backyard grill. I don’t think I have to explain why these analogies don’t hold up. What I would like to point out is that abstaining from certain products on moral grounds (and even hectoring friends to do the same) is not at all bizarre behavior. We see it all the time by animal lovers who won’t buy leather or products tested on animals, or people who avoid buying diamonds from conflict areas. I’m sure there are products Russ wouldn’t buy on moral grounds.

So if you honestly believe (and I don’t) that patronizing Apple will help contribute to the closing of the Internet, and you value that openness, especially for political reasons, you would be acting perfectly rationally by boycotting Apple. And such an act would have nothing to do with anti-capitalism because, as Tom Hazlett points out, open business models are perfectly compatible with capitalism.

Now stay tuned. In another post later today I’ll suggest why in fact Apple may be good for the open internet.

]]>
https://techliberation.com/2010/11/04/boycotting-apple-is-not-irrational/feed/ 4 32829
DIY News and Commentary https://techliberation.com/2010/10/13/diy-news-and-commentary/ https://techliberation.com/2010/10/13/diy-news-and-commentary/#comments Wed, 13 Oct 2010 05:39:30 +0000 http://techliberation.com/?p=32320

What a delight it has been to watch the rescue of the Chilean miners on a live feed, without commentary from any plasticized, blathering “news reporter.” Of course, there are editorial judgments being made by the camera crews and on-scene director, but it is refreshing to make my own judgments based on what I see happening and what I see on the faces of the miners, their wives, and standers-by.

As my friend, the curmudgeonly @derekahunter notes, “There’s really nothing worse than listening to a reporter attempting to fill time while waiting for something to happen.”

Meanwhile, I’ve been chasing down some intemperate commentary on Twitter about the recent discovery of explosives in a New York cemetery. One Fred Burton, identified on his Twitter feed as Vice President of Intelligence for STRATFOR and a former counter-terrorism agent, Tweeted at the time that these explosives seemed like “a classic dead drop intended for an operative.”

But now we know the explosives are old, they were dug up and laid aside in May or June of 2009, and someone recently found them and decided to report them. That is not consistent with a dead drop, and Burton was wrong to speculate as he did, starting an Internet rumor that needlessly propagates fear.

As a public service, I’m doing a little bit to cut into Burton’s credibility, which should cause him to think twice next time. The winning Tweet is not mine, though. It’s @badbanana’s: “Military-grade explosives found at NYC cemetery. Hundreds confirmed dead.”

In summary, it’s a do-it-yourself news and commentary night. I’m making my world and re-making yours (just a tiny bit), rather than all of us sitting around being fed what to think.

]]>
https://techliberation.com/2010/10/13/diy-news-and-commentary/feed/ 1 32320
The end of software ownership https://techliberation.com/2010/09/20/the-end-of-software-ownership/ https://techliberation.com/2010/09/20/the-end-of-software-ownership/#comments Mon, 20 Sep 2010 14:57:51 +0000 http://techliberation.com/?p=31870

My article for CNET this morning, “The end of software ownership…and why to smile,” looks at the important decision a few weeks ago in the Ninth Circuit copyright case, Vernor v. Autodesk.  (See also excellent blog posts on Eric Goldman’s blog. Unfortunately these posts didn’t run until after I’d finished the CNET piece.)

The CNET article took the provocative position that Vernor signals the eventual (perhaps imminent) end to the brief history of users “owning” “copies” of software that they “buy,” replacing the regime of ownership with one of rental.  And, perhaps more controversially still, I try to make the case that such a dramatic change is in fact not, as most commentators of the decision have concluded, a terrible loss for consumers but a liberating victory.

I’ll let the CNET article speak for itself.  Here I want to make a somewhat different point about the case, which is that the “ownership” regime was always an aberration, the result of an unfortunate need to rely on media to distribute code (until the Internet) coupled with a very bad decision back in 1976 to extend copyright protection to software in the first place.

The Vernor Decision, Briefly

First, a little background.

The Vernor decision, in brief, took a big step in an on-going move by the federal courts to allow licensing agreements to trump user rights reserved by the Copyright Act.  In the Vernor case, the most important of those rights was at issue:  the right to resell used copies.

Vernor, an eBay seller of general merchandise, had purchased four used copies of an older version of AutoCAD from a small architectural firm at an “office sale.”

The firm had agreed in the license agreement not to resell the software, and had reaffirmed that agreement when it upgraded its copies to a new version of the application.  Still, the firm sold the media of the old versions to Vernor, who in turn put them up for auction on eBay.

Autodesk tried repeatedly to cancel the auctions, until, when Vernor put the fourth copy up for sale, eBay temporarily suspended his account.  Vernor sued Autodesk, asking the court for a declaratory judgment (essentially a preemptive lawsuit) that as the lawful owner of a copy of AutoCAD, he had the right to resell it.

A lower court agreed with Vernor, but the Ninth Circuit reversed, and held that the so-called “First Sale Doctrine,” codified in the Copyright Act, did not apply because the architectural firm never bought a “copy” of the application.  Instead, the firm had only paid to use the software under a license from Autodesk, a license the firm had clearly violated.  Since the firm never owned the software, Vernor acquired no rights under copyright when he purchased the disks.

The Long Arm of Vernor?

This is an important decision, since all commercial software (and even open source and freeware software) is enabled by the producer only on condition of acceptance by the user of a license agreement.

These days, nearly all licenses purport to restrict the user’s ability to resell the software without permission from the producer.  (In the case of open source software under the GPL, users can redistribute the software so long as they repeat the other limits, including the requirement that modifications to the software also be distributed under the GPL.)  Thus, if the Vernor decision stands, used markets for software will quickly disappear.

Moreover, as the article points out, there’s no reason to think the decision is restricted just to software.  The three-judge panel suggested that any product—or at least any information-based product—that comes with a license agreement is in fact licensed rather than sold.  Thus, books, movies, music and video games distributed electronically in software-like formats readable by computers and other devices are probably all within the reach of the decision.

Who knows?  Perhaps Vernor could be applied to physical products—books, toasters, cars—that are conveyed via license.  Maybe before long consumers won’t own anything anymore; they’ll just get to use things, like seats at a movie theater (the classic example of a license), subject to limits imposed—and even changed at will—by the licensor.  We’ll become a nation of renters, owning nothing.

Well, not so fast.  First of all, let’s note some institutional limits of the decision.  The Ninth Circuit’s ruling applies only within federal courts of the western states (including California and Washington, where this case originated).  Other circuits facing similar questions of interpretation may reach different or even opposite decisions.

Vernor may also appeal the decision to the full Ninth Circuit or even the U.S. Supreme Court, though in both cases the decision to reconsider would be at the discretion of the respective court.  (My strong intuition is that the Supreme Court would not take an appeal on this case.)

Also, as Eric Goldman notes, the Ninth Circuit already has two other First Sale Doctrine cases in the pipeline.  Other panels of the court may take a different or more limited view.

For example, the Vernor case deals with a license that was granted by a business (Autodesk) to another business (the architectural firm).  But courts are often hesitant to enforce onerous or especially one-sided terms of a contract (a license is a kind of contract) between a business and an individual consumer.  Consumers, more than businesses, are unlikely to be able to understand the terms of an agreement, let alone have any realistic expectation of negotiating over terms they don’t like.

Courts, including the Ninth Circuit, may decline to extend the ruling to other forms of electronic content, let alone to physical goods.

The Joy of Renting

So for now let’s take the decision on its face:  Software licensing agreements that say the user is only licensing the use of software rather than purchasing a copy are enforceable.  Such agreements require only a few “magic words” (to quote the Electronic Frontier Foundation’s derisive view of the opinion) to transform software buyers into software renters.  And it’s a safe bet that any existing End User Licensing Agreements (EULAs) that don’t already recite those magic words will be quickly revised to do so.

(Besides EFF, see scathing critiques of the Vernor decision at Techdirt and Wired.)

So. You don’t own those copies of software that you thought you purchased. You just rent them from the vendor, on terms offered on a take-it-or-leave-it basis and subject to revision at will. All those disks in all those cardboard albums sitting on a shelf in your office are really the property of Microsoft, Intuit, Activision, and Adobe. You don’t have to return them when the license expires, but you can’t transfer ownership of them to someone else because you don’t own them in the first place.

Well, so what?  Most of those boxes are utterly useless within a very short period of time, which is why there never has been an especially robust market for used software.  What real value is there to a copy of Windows 98, or last year’s TurboTax, or Photoshop Version 1.0?

Why does software get old so quickly, and why is old software worthless?  To answer those questions, I refer in the article to an important 2009 essay by Kevin Kelly.  Kelly, for one, thinks the prospect of renting rather than owning information content is not only wonderful but inevitable, and not because courts are being tricked into saying so.  (Kelly’s article says nothing about the legal aspects of ownership and renting.)

Renting is better for consumers, Kelly says, because ownership of information products introduces significant costs and absolutely no benefits to the consumer.  Once content is transformed into electronic formats, both the media (8-track) and the devices that play them (Betamax) grow quickly obsolete as technology improves under the neutral principle of Moore’s Law.  So if you own the media you have to store it, maintain it, catalog it and, pretty soon, replace it.  If you rent it, just as any tenant, those costs are borne by the landlord.

Consumers who own libraries of media find themselves regularly faced with the need to replace them with new media if they want to take advantage of the new features and functions of new media-interpreting devices.  You’re welcome to keep the 78’s that scratch and pop and hiss, but who really wants to?  Nostalgia only goes so far, and only for a unique subset of consumers.  Most of us like it when things get better, faster, smaller, and cheaper.

In the case of software, there’s the additional and rapid obsolescence of the code itself.  Operating systems have to be rewritten as the hardware improves and platforms proliferate.  Tax preparation software has to be replaced every year to keep up with the tax code.  Image manipulation software gets ever more sophisticated as display devices are radically improved.

Unlike a book or a piece of music, software is only written for the computer to “read” in the first place.  You can always read an old book, even if you now prefer the convenience of a device such as a Kindle.  But you could never read the object code for AutoCAD even if you wanted to—the old version (which got old fast, and not just to encourage you to buy new versions) is just taking up space in your closet.

The Real Crime was Extending Copyright to Software in the First Place

In that sense, it never made any sense to “own” “copies” of software in the first place.  That was only the distribution model for a short time, necessitated by an unfortunate technical limit of computer architecture that has nearly disappeared.  CPUs require machine-readable code to be moved into RAM in order to be executed.

But core memory was expensive.  Code came loaded on cheap tape, was copied to more expensive disks, and was then read into even more expensive memory.  In a perfect world with unlimited free memory, the computer would have come pre-loaded with everything.

That wouldn’t have solved the obsolescence problem, however.  The Internet did, by eliminating the need for physical media copies in the first place.  Nearly all the software on my computer was downloaded—if I got a disk at all, it was just to initiate the download and installation.  (The user manual, the other component of the software album, is only on the disk or online these days.)

As we move from physical copies to downloaded software, vendors can more easily and more quickly issue new versions, patches, upgrades, and added functionality (new levels of video games, for example).

And, as we move from physical copies to virtual copies residing in the cloud, it becomes increasingly less weird to think that the thing we paid for—the thing that’s sitting right there, in our house or office—isn’t really ours at all, even though we paid for it, bagged it, transported it, and unwrapped it just as we do all the other commodities that we do own.

That’s why the Vernor decision, in the end, isn’t really all that revolutionary.  It just acknowledges in law what has already happened in the market.  We don’t buy software.  We pay for a service—whether by the month, or by the user, or by looking at ads, or by the amount of processing or storage or whatever we do with the service—and regardless of whether the software that implements the service runs on our computer or someone else’s, or, for that matter, everyone else’s.

The crime here, if there is one, isn’t that the courts are taking away the First Sale Doctrine.  It’s not, in other words, that one piece of copyright law no longer applies to software.  The crime is that copyright—any part of it—ever applied to software in the first place.  That’s what led to the culture of software “packages” and “suites” and “owning copies” that was never a good fit, and which now has become more trouble than it’s worth.

Remember that before the 1976 revisions to the Copyright Act, it was pretty clear that software wasn’t protected by copyright.  Until then, vendors (there were very few, and, of course, no consumer market) protected their source code by delivering only object code, by holding users to the terms of contracts based on the law of trade secrets, or both.

That regime worked just fine.  But vendors got greedy, and took the opportunity of the 1976 reforms to lobby for extension of copyright for source code.  Later, they got greedier, and chipped away at bans on applying patent law to software as well.

Not that copyright or patent protection really bought the vendors much.  Efforts to use it to protect the “look and feel” of user interfaces, as if they were novels that read too closely to an original work, fell flat.

Except when it came to stopping the wholesale reproduction and unauthorized sale of programs in other countries, copyright protection hasn’t been of much value to vendors.  And even then the real protection for software was and remains the rapid revision process driven by technological, rather than business or legal, change.

But the metaphor equating software with novels had unintended consequences.  With software protected by copyright, users—especially consumers—became accustomed to the language of copies and ownership and purchase, and to the protections of the law of sales, which applies to physical goods (books) and not to services (accounting).

So, if consumer advocates and legal scholars are enraged by the return to a purely contractual model for software use, in some sense the vendors have only themselves—or rather their predecessors—to blame.

But that doesn’t change the fact that software never fit the model of copyright, including the First Sale Doctrine.  Just because source code kind of sort of looked like it was written in a language readable by a very few humans, the infamous CONTU Committee, making recommendations to Congress, made the leap to treating software as a work of authorship by (poor) analogy.

With the 1976 Copyright Act, the law treated software as if it were a novel, giving exclusive rights to its “authors” for a period of time that is absurd compared to the short economic lifespan of any piece of code written since the time of Charles Babbage and Ada Lovelace.

The farther away from a traditional “work of authorship” that software evolves (visual programming, object-oriented architecture, interpreted languages such as HTML), the more unfortunate that decision looks in retrospect.  Source code is just a convenience, making it easier to write and maintain programs.  But it doesn’t do anything.  It must be compiled or interpreted before the hardware will make a peep or move a pixel.
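
To make that concrete, here is a minimal sketch in Python (my own illustration, not anything drawn from the case record) of the difference between source text and work.  The names in it are hypothetical; it simply shows that a program is inert data until an interpreter, or a compiler plus the CPU, executes it.

    # Source code is inert text: storing it, copying it, or shelving it does nothing.
    source = 'print("hello")'

    # Only when the interpreter compiles and executes that text does anything actually happen.
    exec(compile(source, "<inline>", "exec"))  # prints: hello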

Author John Hersey, one of the CONTU Committee members, got it just right.  In his dissent from the recommendation to extend copyright to software, Hersey wrote, “software utters work.  Work is its only utterance and its only purpose.”

Work doesn’t need the incentives and protections we have afforded to novels and songs.  And consumers can no more resell work than they can take home their seat from the movie theater after the show.

]]>
https://techliberation.com/2010/09/20/the-end-of-software-ownership/feed/ 5 31870
Open Video Conference, New York City, Oct. 1-2 https://techliberation.com/2010/09/02/open-video-conference-new-york-city-oct-1-2/ https://techliberation.com/2010/09/02/open-video-conference-new-york-city-oct-1-2/#respond Thu, 02 Sep 2010 21:57:20 +0000 http://techliberation.com/?p=31593

I’ll be there, speaking on a privacy-focused panel entitled: “We Know What You Watch.”

Spooky!

There’s an interesting agenda and, as conferences go, this one seems to be pretty well organized. For example, they have a page of badges they encourage participants to use in promotions like this one. (What do you think of the one I selected?)

And they suggest the Twitter hashtags #openvideo and #ovc10.

Once again, New York TLFers, that’s the Open Video Conference, Oct. 1-2 at the Fashion Institute of Technology.

]]>
https://techliberation.com/2010/09/02/open-video-conference-new-york-city-oct-1-2/feed/ 0 31593
“Jailbreaking” Won’t Land You In Jail https://techliberation.com/2010/07/29/jailbreaking-wont-land-you-in-jail/ https://techliberation.com/2010/07/29/jailbreaking-wont-land-you-in-jail/#comments Thu, 29 Jul 2010 17:54:07 +0000 http://techliberation.com/?p=30751

The Digital Millennium Copyright Act makes it a crime to circumvent digital rights management technologies but allows the Librarian of Congress to exempt certain classes of works from this prohibition.

The Copyright Office just released a new rulemaking on this issue in which it allows people to “unlock” their cell phones so they can be used on other networks and “jailbreak” closed mobile phone operating systems like the iOS operating system on Apple’s iPhones so that they will run unapproved third-party software.

This is arguably good news for consumers: Those willing to void their warranties so they can teach their phone some new tricks no longer have to fear having their phone confiscated, being sued, or being imprisoned. (The civil and criminal penalties are described in 17 USC 1203 and 17 USC 1204.) Although the new exemption does not protect those who distribute unlocking and/or jailbreaking software (which would be classified under 17 USC 1201(b), and thus outside the exemption of 17 USC 1201(a)), the cases discussed below could mean that jailbreaking phones simply falls outside of the scope of all of the DMCA’s anti-circumvention provisions.

Apple opposed this idea when it was initially proposed by the Electronic Frontier Foundation, arguing that legalizing jailbreaking constituted a forced restructuring of its business model that would result in “significant functional problems” for consumers that could include “security holes and malware, as well as possible physical damage.” But who, beyond a small number of geeks brave enough to give up their warranties and risk bricking their devices, is really going to attempt jailbreaking? One survey found that only 10% of iPhone users have jailbroken their phones, and the majority are in China, where the iPhone was not available legally until recently. Is it really likely that giving the tinkering minority the legal right to void their product warranties would cause any harm to the non-tinkering majority that will likely choose to remain within a manufacturer’s “walled garden”? I don’t think so. If, as a result of this ruling, large numbers of consumers jailbreak their phones and install pirated software, the Copyright Office can easily reconsider the exemption in its next Triennial Rulemaking.

While the ruling is heartening, it is not surprising. In Chamberlain Group, Inc. v. Skylink Techs., Inc.,  the United States Court of Appeals for the Federal Circuit held that trafficking in a circumvention device violates Section 1201(a)(2) only if the circumvention enables access that “infringes or facilitates infringing a right protected by the Copyright Act.” The Chamberlain case involved unlicensed third-party garage door opener remotes. The Sixth Circuit came to a similar decision in Lexmark International, Inc. v. Static Control Components, Inc., a case involving a software “handshake” between Lexmark printers and Lexmark-branded toner cartridges meant to keep third-party replacement toner cartridges off the market. The Copyright Office’s ruling is just another example of policymakers recognizing that Copyright law exists only to protect copyrighted works, not business models based on excluding access.

But self-help is a two-way street: Companies are, and should be, free to continue using their own “self-help” technical protection measures to prevent (or merely discourage) customers from reverse-engineering their products. This highlights what Larry Lessig describes as the distinction between East Coast Code (laws) and West Coast Code (software). It makes perfect sense for companies to avail themselves of all possible methods (software and laws) to protect their revenue streams, but lawbreakers, by definition, don’t respect laws. Although most technical protection measures have been woefully inadequate to date (see, e.g., 1, 2, 3, 4, 5, to name a few), cryptographically-secure code is much more likely to be effective in the long-term than laws.
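
For readers who want to see what that kind of West Coast Code looks like in practice, here is a deliberately simplified sketch (a hypothetical of mine, not Apple's or any vendor's actual mechanism) of signature-based app approval: the device refuses to install anything not signed by the vendor, and no statute is needed to make that stick. It assumes the third-party Python cryptography library; the names vendor_private_key, device_trusted_key, and device_will_install are my own placeholders.

    # A simplified model of vendor code-signing (hypothetical, not any vendor's real scheme).
    # Requires the third-party "cryptography" package.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    # The vendor keeps the private key; only the public half ships on the device.
    vendor_private_key = Ed25519PrivateKey.generate()
    device_trusted_key = vendor_private_key.public_key()

    def device_will_install(package: bytes, signature: bytes) -> bool:
        """Install a package only if it carries a valid vendor signature."""
        try:
            device_trusted_key.verify(signature, package)
            return True
        except InvalidSignature:
            return False

    approved_app = b"vendor-approved app binary"
    blessing = vendor_private_key.sign(approved_app)

    print(device_will_install(approved_app, blessing))       # True
    print(device_will_install(b"sideloaded app", blessing))  # False

Jailbreaking, in this simplified picture, means altering the device so that the check above is skipped or trusts a different key, which is why the legal question of whether circumvention is permitted is separate from the technical question of whether it is feasible.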

While this decision probably doesn’t matter much for the average, non-tinkering consumer, tinkerers will be comforted by the fact that their hobby is no longer a crime. And without the threat of criminal sanctions, what new mobile phones are really capable of should get far more publicity. That, in turn, should put additional pressure on phone manufacturers to take off the training wheels and be a bit more open about what apps they allow on their devices.

While Apple is correct in pointing out that some users with jailbroken phones still call Apple’s technical support lines, it is quite impossible to accidentally jailbreak your phone, and all of the websites with instructions on how to do so have extensive disclaimers warning about the possible consequences. At some point, consumers should be responsible for their own actions. The Librarian of Congress is willing to give them that responsibility. And whether they want to or not, phone manufacturers will, too.

]]>
https://techliberation.com/2010/07/29/jailbreaking-wont-land-you-in-jail/feed/ 3 30751
Eric Frank on Flat World Knowledge https://techliberation.com/2010/07/06/eric-frank-on-flat-world-knowledge/ https://techliberation.com/2010/07/06/eric-frank-on-flat-world-knowledge/#comments Tue, 06 Jul 2010 13:44:02 +0000 http://techliberation.com/?p=30112

On the podcast this week, Eric Frank, co-founder and president of Flat World Knowledge, the leading publisher of commercial, openly licensed college textbooks, discusses the company and its business model, which he compares to that of Red Hat. In the podcast Frank addresses moral hazards of the traditional college textbook publishing model, the company’s genesis, products and services it offers, how it makes money, and why it appeals to students, professors, and authors.

Related Readings

Do check out the interview, and consider subscribing to the show on iTunes. Past guests have included Clay Shirky on cognitive surplus, Nick Carr on what the internet is doing to our brains, Gina Trapani and Anil Dash on crowdsourcing, James Grimmelman on online harassment and the Google Books case, Michael Geist on ACTA, Tom Hazlett on spectrum reform, and Tyler Cowen on just about everything.

So what are you waiting for? Subscribe!

]]>
https://techliberation.com/2010/07/06/eric-frank-on-flat-world-knowledge/feed/ 1 30112
Clay Shirky on Cognitive Surplus https://techliberation.com/2010/06/14/clay-shirky-on-cognitive-surplus/ https://techliberation.com/2010/06/14/clay-shirky-on-cognitive-surplus/#comments Mon, 14 Jun 2010 14:30:13 +0000 http://techliberation.com/?p=29727

On this week’s episode of the podcast, Clay Shirky, adjunct professor at New York University’s Interactive Telecommunications Program, discusses his new book, Cognitive Surplus: Creativity and Generosity in a Connected Age. Shirky talks about social and economic effects of Internet technologies and interrelated effects of social and technological networks.  In this podcast he discusses social production, open source software, Wikipedia, defaults, Facebook, and more.

Related Readings

Do check out the interview, and consider subscribing to the show on iTunes. Past guests have included Nick Carr on what the internet is doing to our brains, Gina Trapani and Anil Dash on crowdsourcing, James Grimmelman on online harassment and the Google Books case, Michael Geist on ACTA, Tom Hazlett on spectrum reform, and Tyler Cowen on just about everything.

So what are you waiting for? Subscribe!

]]>
https://techliberation.com/2010/06/14/clay-shirky-on-cognitive-surplus/feed/ 3 29727
video: Some Thoughts on the Free Culture Debate https://techliberation.com/2010/03/21/video-some-thoughts-on-the-free-culture-debate/ https://techliberation.com/2010/03/21/video-some-thoughts-on-the-free-culture-debate/#comments Sun, 21 Mar 2010 19:26:52 +0000 http://techliberation.com/?p=27327

Andrew Keen recently asked me to sit down and chat with him as part of a new series of video interviews he is conducting for Arts + Labs called “Keen on Media.” You can find the discussions with me here (or on Vimeo here). Keen asked me to talk about a wide variety of issues, but this first video features some thoughts about the tensions between the free culture movement and those that continue to favor property rights and proprietary business models as the foundation of the economy. Consistent with what I have argued in the past, I advocated a mushy middle-ground position of preserving the best of both worlds. I believe that free and open source software has produced enormous social & economic benefits, but I do not believe that it will or should replace all proprietary business models or methods.  Each model or mode of production has its place and purpose and they should continue to co-exist going forward, albeit in serious tension at times.

Embedded video: Adam Thierer (part 1) from andrewkeen on Vimeo.

]]>
https://techliberation.com/2010/03/21/video-some-thoughts-on-the-free-culture-debate/feed/ 8 27327
Net Neutrality Tail Wags Broadband Dog https://techliberation.com/2010/03/11/net-neutrality-tail-wags-broadband-dog/ https://techliberation.com/2010/03/11/net-neutrality-tail-wags-broadband-dog/#comments Fri, 12 Mar 2010 02:56:07 +0000 http://techliberation.com/?p=27034

I published an opinion piece today for CNET arguing against recent calls to reclassify broadband Internet as a “telecommunications service” under Title II of the Communications Act.

The push to do so comes as supporters of the FCC’s proposed Net Neutrality rules fear that the agency’s authority to adopt them under its so-called “ancillary jurisdiction” won’t fly in the courts.  In January, the U.S. Court of Appeals for the D.C. Circuit heard arguments in Comcast’s appeal of sanctions levied against the cable company for violations of the neutrality principles (not yet adopted under a formal rulemaking).  The three-judge panel expressed considerable doubt about the FCC’s jurisdiction in issuing the sanctions during oral arguments.  Only the published opinion (forthcoming) will matter, of course, but anxiety is growing.

Solving the Net Neutrality jurisdiction problem with a return to Title II regulation is a staggeringly bad idea, and a counter-productive one at that.  My article describes the parallel developments in “telecommunications services” and the largely unregulated “information services” (aka Title I) since the 1996 Communications Act, making the point that life for consumers has been far more exciting—and has generated far more wealth–under the latter than the former.

Under Title I, in short, we’ve had the Internet revolution.  Under Title II, we’ve had the decline and fall of basic wireline phone service, boom and bust in the arbitraging competitive local exchange market, massive fraud in the bloated e-Rate program, and the continued corruption of local licensing authorities holding applications hostage for legal and illegal bribes.

But the FCC has not ruled out the idea of reclassification.  Indeed, just as the piece was being published, FCC Chairman Julius Genachowski was testifying before a Senate committee considering Comcast’s proposed merger with NBC Universal.  When asked whether the agency was considering reclassification, the Chairman responded:  “We are defending the position that Title I gives us the authority we need. We’ll continue to assert that position and hope we will get a favorable decision. If the court does something that requires us to reassess, we’ll do that.”

I leave for another day a detailed discussion of whether the FCC could in fact reclassify broadband Internet as a telecommunications service without explicit authority to do so from Congress.  It was the Commission, after all, who argued successfully in the Brand X case that broadband Internet clearly fit the definition of information service.  (I criticized the Brand X case when it was decided as not going far enough, see “Cure for the Common Carrier,” CIO Insight, April 2005)

Under the Chevron Doctrine, the U.S. Supreme Court gives great deference to agencies in the interpretation of their governing statutes.  Brand X held that the two definitions were ambiguous and that the FCC’s resolution of that ambiguity was reasonable.  Under Chevron, that ends the involvement of the courts.  To reclassify broadband, the FCC would have to argue that its interpretation was not in fact reasonable.

Left out for reasons of length was a discussion of the history of the two terms and the source of the definitions given for them in the 1996 Act.  That history demonstrates even more clearly, I think, that regulation of the Internet cannot and should not be governed by Title II.

Title II telecommunications services are subject to the “common carrier” provisions—including unbundling, rate oversight, and the Universal Service Fund—that were created over the years to manage the legal monopoly held by AT&T.  Under the 1913 Kingsbury Agreement, AT&T was able to maintain its control over most aspects of U.S. telecommunications in exchange for accepting government oversight.  In 1934 Congress created the Federal Communications Commission, which addressed itself to regulating AT&T’s interstate business.  State and local authorities handled the intrastate business.

Over the last thirty years, the AT&T monopoly has been largely broken up by a combination of disruptive technologies (alternate transport including cable, satellite, and cellular, and new architectures including the Internet) and government interventions.  The equipment monopoly of AT&T’s Western Electric subsidiary went first, followed by the separation of local and long distance in 1984.  Competition in long distance was followed in 1996 by the forced opening of local services.  What was left of the original AT&T was acquired by some of its former subsidiaries in 2005.

Meanwhile, since the 1950’s the computer revolution has been busily transforming business and life.  In the early days of standalone computers, there was no overlap between computing and telephony, but with the advent of time-sharing and later interactive and distributed computing, private data communications began to develop using much of the infrastructure invented for voice.  A simple—and I think, entirely reasonable—way to distinguish between “information services” and “telecommunications services” is to understand that historically “information services” meant private data communications and “telecommunications services” meant voice.

For the most part, information services have remained outside of the FCC’s regulatory clutches for the simple reason that AT&T, until 1980, was banned from offering data communications.   (It’s a long story, but that ban is the reason Linux—a free reimplementation of AT&T’s UNIX operating system—is open source.)  Since the regulated monopoly wasn’t involved in the computer or data communications business, there was no basis for subjecting it to common carrier rules.  That’s a fortunate accident of history, of course, because leaving computing unregulated has meant all the difference in its dramatic growth.

The legal border between Title I and Title II goes back at least to 1956 and an earlier attempt to break up AT&T.  The effort failed, but a Faustian bargain was struck.  AT&T and its subsidiaries (including Western Electric and Bell Labs) agreed to stay out of the computer business (“information” or “enhanced services” as it was called at the time) and remained under the regulatory control of the FCC for traditional telephony (“telecommunications services”).  In exchange, the government let AT&T stay together.

At the time, commercial computing was in its infancy.  There was no data communications and certainly no Internet.

In 1980, AT&T was released from its pledge not to offer data services.  But four years later, before it could really do very much with its freedom, the company was broken up by Judge Harold Greene, who then ran the communications industry from his chambers until Congress finally passed the 1996 Act.

In any case, by the 1980’s the computer industry had evolved dramatically.  The provisioning of private data services was a fast-growing business.  IBM, DEC, and other providers had developed proprietary networks and proprietary standards, and used them to lock customers into their hardware and software products.

All of that had changed by 1996.  For the first time, it was not only businesses but consumers who were using information services.  Public data networks operated by America On-Line, CompuServe and Prodigy had millions of subscribers and were growing rapidly.  Netscape Navigator unleashed the full potential of the World Wide Web as a non-proprietary networking standard, creating new industries and services which continue to evolve at a staggering pace.  The protocols that made up the Internet had shifted from government and university use to public use.

Given this history and the great uncertainty in 1996 as to where commercial and consumer computing was headed, Congress decided that the fast-growing provision of data communications should remain outside the control of the FCC.  Hence “information services” remained unregulated and “telecommunications services” remained regulated.  The terms and their definitions in the 1996 Act were lifted, for better and for worse, straight out of the 1984 decree in the AT&T antitrust case.

(Much of this history is recounted in detail by Prof. Susan Crawford in a 2009 article in the Boston University Law Review, “Transporting Communications.”   Though I don’t share Prof. Crawford’s conclusions, the story is told quite well.)

One could argue (I have) that Title II regulation makes little sense even as applied to wireline voice communications (what’s known in the industry as POTS – Plain Old Telephone Service) now that nearly everything has converged under the Internet—including voice.  Or put another way, the continued application of common carrier rules is introducing more costs to consumers than any inefficiencies that might exist in an unregulated world.

But it’s much harder to make the case that we should actually move data communications over to Title II in addition to POTS, especially if the only reason behind doing so is to salvage the Net Neutrality rulemaking.  To mix some metaphors, that’s both the tail wagging the dog and killing the goose that lays the golden eggs.  It also flies in the face of the history of the two titles, the consumer experience of life under Title I, and common sense.

]]>
https://techliberation.com/2010/03/11/net-neutrality-tail-wags-broadband-dog/feed/ 6 27034
Two Cheers for the Treasury Department on Internet Freedom! https://techliberation.com/2010/03/08/two-cheers-for-the-treasury-department-on-internet-freedom/ https://techliberation.com/2010/03/08/two-cheers-for-the-treasury-department-on-internet-freedom/#comments Mon, 08 Mar 2010 22:46:47 +0000 http://techliberation.com/?p=26926

The Treasury Department today announced that it would grant the State Department’s December request (see the Iran letter here) for a waiver from U.S. embargoes that would allow Iranians, Sudanese and Cubans to download “free mass market software … necessary for the exchange of personal communications and/or sharing of information over the internet such as instant messaging, chat and email, and social networking.”

I’m delighted to see that the Treasury Department is implementing Secretary Clinton’s pledge to make it easier for citizens of undemocratic regimes to use Internet communications tools like e-mail and social networking services offered by US companies (which Adam discussed here). It has been no small tragedy of mindless bureaucracy that our sanctions on these countries have actually hampered communications and collaboration by dissidents—without doing anything to punish oppressive regimes. So today’s announcement is a great victory for Internet freedom and will go a long way to bringing the kind of free expression we take for granted in America to countries like Iran, Sudan and Cuba.

But I’m at a loss to explain why the Treasury Department’s waiver is limited to free software. The U.S. has long objected when other countries privilege one model of software development over another—and rightly so: Government should remain neutral as between open-source and closed-source, and between free and paid models. This “techno-agnosticism” for government is a core principle of cyber-libertarianism: Let markets work out the right mix of these competing models through user choice!

Why should we allow dissidents to download free “Web 2.0” software but not paid software? Not all mass-market tools dissidents would find useful are free. Many “freemium” apps, such as Twitter client software, require purchase to get full functionality, sometimes including privacy and security features that are especially useful for dissidents. To take a very small example that’s hugely important to me as a user, Twitter is really only useful on my Android mobile phone because I run the Twidroid client. But the free version doesn’t support multiple accounts or lists, which are essential functions for a serious Tweeter. The Pro version costs just $4.89—but if I lived in Iran, U.S. sanctions would prevent me from buying this software. More generally, we just don’t know what kind of innovative apps or services might be developed that would be useful to dissidents, so why foreclose the possibility of supporting them through very small purchases?

If Treasury is worried about creating a loophole that could allow evasion of U.S. sanctions, surely there are better ways to prevent such abuse than simply continuing to ban even small software purchases, especially since the purchase price for freemium apps is often just a few dollars. Or the U.S. Government could even negotiate a blanket license for all downloads from embargoed countries with software developers to ensure that our export controls do not deny dissidents the best tools available.

The practitioners at Steptoe & Johnson asked some good questions about this proposal back in December when State sent its request to Treasury.

]]>
https://techliberation.com/2010/03/08/two-cheers-for-the-treasury-department-on-internet-freedom/feed/ 3 26926
Open Source and Auto Safety https://techliberation.com/2010/02/22/open-source-and-auto-safety/ https://techliberation.com/2010/02/22/open-source-and-auto-safety/#respond Mon, 22 Feb 2010 15:19:24 +0000 http://techliberation.com/?p=26342

Tim Lee points to “The Toyota Recall and the Case for Open, Auditable Source Code.”

Knowing how the technology in our cars works is not just a safety issue, but a privacy issue—and maybe even a tax issue.

]]>
https://techliberation.com/2010/02/22/open-source-and-auto-safety/feed/ 0 26342
Bailout for the First Amendment vs. Preservation of Competing Biases https://techliberation.com/2010/02/17/bailout-for-the-first-amendment-vs-preservation-of-competing-biases/ https://techliberation.com/2010/02/17/bailout-for-the-first-amendment-vs-preservation-of-competing-biases/#comments Wed, 17 Feb 2010 17:02:03 +0000 http://techliberation.com/?p=26201

Clearly many groups contend there’s a “crisis” in journalism, even to the extent of advocating government support of news organizations, despite the dangers inherent in the concept of government-funded ideas and their impact on critique and dissent. 

Georgetown is hosting a conference today called “The Crisis In Journalism: What should Government Do,” (at which Adam Thierer is speaking), with the defining question, “How can government entities, particularly the Federal Trade Commission and the Federal Communications Commission, help to form a sustainable 21st century model for journalism in the United States?”

We actually resolved the question of “What Government Should Do,” in a manner that influenced the entire world, with passage of the Bill of Rights and its First Amendment.  The Constitution was ratified by nine states on June 21, 1788.  Georgetown, your conference host, was founded January 23, 1789.  As far as I can tell, Georgetown didn’t hold a “Crisis In Journalism” conference that week, even though there was little national media industry to speak of and thus much more of a prevailing crisis situation than today, when you stop and think of it.

Then the Bill of Rights was ratified on December 15, 1791–and still no Georgetown conference. Amazingly, at the time, our ancestors thought it appropriate for the federal government to establish a First Amendment and step aside, even though there were no TVs or radios, or Internet and websites, iPods, or stories broken by Twitter.  There wasn’t even an FCC yet to ponder a “sustainable 19th century model for journalism in the United States.” 

Media at that time barely existed compared to what we have today. Yet there was no crisis.  Nor is there a crisis today.

What this feigned crisis signifies is, on the one hand, pure indulgence of a wealthy society struggling with “creative destruction” in media; and, on the other, the desire for more political control of information flows and public opinion rather than enshrinement of the only condition appropriate to a free society—the preservation of competing biases.  These are ultimately far more important than pretended objectivity in both the preservation of our liberties and in the creation of “information wealth.”  Too often, the class interest of intellectuals is statism (continue reading Schumpeter for details, I’m not doing it here);  and the journalism industry, far from alone among myriad economic endeavors, is highly vulnerable to those same collectivist impulses. 

Convincing the public and policymakers that media is in crisis is essential for progressives to maintain influence now: while progressives long since successfully established government agencies with broad political control over communications, today they find themselves desperate to maintain that slipping control in the era in which media abundance undermines those agencies’ very reason for being.

Outrageous and demeaning calls for public funding of journalism, public spaces, information commons, artificial “crises” and other such manipulative indulgences draw their energy from the flawed premise that capitalism and freedom are inimical to civil society and the diffusion of ideas, when they are instead the prerequisites.  America established a First Amendment precisely because government and political machinery can threaten these precious values. Competition in creation of goods and services creates tangible wealth; competition in creation of ideas (including scientific research, yet another notion to discuss later) ultimately does the same and enhances liberties. Government funding removes the element of competition, on purpose.

This crisis is phony, except obviously for the specific businesses that are being upended.  Media, information, journalism, whatever it gets called, can only be irreparably damaged by censorship, the only crisis to which journalism is ever vulnerable.  But it also counts as censorship if progressives control information or succeed in funding it politically and prevent proprietary business models in the content, reporting and infrastructure of the future.  A Bailout for the First Amendment is catastrophic policy, even if its advocates’ stated goals are merely to make us all enlightened (somewhat left-leaning?) citizens.

(Hat tip to my colleague Alex Nowrasteh for helping me find ratification dates.)

]]>
https://techliberation.com/2010/02/17/bailout-for-the-first-amendment-vs-preservation-of-competing-biases/feed/ 2 26201