

Another NFT Explainer

March 29, 2021

I don’t understand the hype surrounding Non-Fungible Tokens (NFTs). Maybe that’s because, as someone who has studied copyright and technology issues for years, NFTs don’t seem very new to me. They’re just a remixing of ideas and technologies that have been around for decades. Let me explain.

For at least 100 years, “ownership” of real property has been thought of as a “bundle of rights.” As a simple example, you may “own” the land your house sits on, but the city probably has a right to build and maintain a sidewalk across your yard and the general public has a right to walk across your property on that sidewalk. The gas company has the right to walk into your side yard to read your gas meter. Pilots have a right to fly over your house. Some other company or companies may have rights to any water and minerals in the ground below your house. Your homeowners association may even have a right to dictate what color you paint the exterior of your house.

This same “bundle of rights” concept also applies to copyright. Unless explicitly granted by contract, buying an original painting doesn’t mean you have the right to take a photograph of the painting and sell prints of the photograph. If you buy a DVD, you have the right to watch the DVD privately and you have the right to sell the DVD when you’re no longer interested in it. (That second right is called the “first sale doctrine” and there have been numerous Supreme Court cases and laws defining its exact boundaries.) But unless explicitly granted by contract, purchasing a DVD doesn’t mean you have the right to set up a projector and big screen and charge members of the public to watch it. That requires a “public performance” right.

When you buy most NFTs, you get very few of the rights that typically come with ownership. You might only get the right to privately exhibit the underlying work. And if you decide to later resell the NFT, the contract (which is embedded in the digital code of the NFT) may stipulate that the original artist gets a 10% royalty on every future sale of the work.
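For illustration, here’s a minimal sketch of the royalty arithmetic a contract like that encodes. Real NFT contracts are typically written in Solidity and run on a blockchain; this is plain Python, and the function name and the 10% rate are just assumptions for the example:

```python
# Sketch of the resale-royalty logic an NFT's embedded contract might enforce.
# Names and numbers are illustrative, not from any real contract standard.

ROYALTY_RATE = 0.10  # 10% of every future sale goes to the original artist


def settle_resale(sale_price: float) -> dict:
    """Split a resale price between the reseller and the original artist."""
    royalty = sale_price * ROYALTY_RATE
    return {
        "artist_royalty": royalty,
        "seller_proceeds": sale_price - royalty,
    }


# A $50,000 resale automatically routes $5,000 back to the artist.
print(settle_resale(50_000))
```

Because this split is executed by the contract code itself, the artist doesn’t have to track down resellers or rely on them honoring the terms voluntarily.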

The second thing you need to understand is the concept of “artificial scarcity.” As a simple example, in the art world, it’s common for photographers and painters to sell numbered, “limited edition” prints of their works. There’s no technological reason why they couldn’t print 1,000 copies of their work, or even register the print with a “print on demand” service that will continue making and selling prints as long as there are people who want to buy them. But limiting the number of prints made (even if each print is identical to every other print) is likely to raise the price. This is artificial scarcity. Most NFTs are an edition of one. Even if there are other exact copies of the underlying artwork sold as NFTs, each NFT is unique. This is like an artist selling numbered prints but not putting a limit on how many numbered prints they make. Each numbered print is technically unique because each has a different number. But without some artificial scarcity, the value of any one print may stay very low.
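The “each NFT is unique” point can be sketched the same way: even if two tokens point at byte-identical artwork, each carries its own serial number (token ID), so the ledger treats them as distinct items, exactly like numbered prints. A toy Python illustration, with made-up names and a made-up artwork identifier:

```python
from dataclasses import dataclass
from itertools import count

_token_ids = count(1)  # each newly minted token gets the next serial number


@dataclass(frozen=True)
class Token:
    token_id: int        # the "print number" -- unique per token
    artwork_hash: str    # identifies the underlying artwork


def mint(artwork_hash: str) -> Token:
    """Mint a new token for the given artwork; uniqueness comes from token_id."""
    return Token(next(_token_ids), artwork_hash)


a = mint("nyan-cat-artwork")  # same underlying artwork...
b = mint("nyan-cat-artwork")
print(a == b)  # False: distinct token IDs, like two differently numbered prints
```

The artwork reference is identical in both tokens; only the serial number differs, which is the whole basis of the scarcity.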

So if buying an NFT doesn’t get you any real rights and the scarcity is purely artificial, why are NFTs selling for hundreds of thousands of dollars? Here’s where all the technology really makes a difference. If you spend millions on a Picasso painting, you’re taking a lot of risks. First, you’re taking the risk that it’s a forgery, which would drop the value to near-zero. Second is the risk that the painting will be stolen from you. Insurance can help deal with both problems, but that adds more complications. If you’re buying the painting as an investment, these complications reduce the “liquidity” of the asset. Liquidity is the ease with which an asset can be converted into cash without affecting the market value of the asset. Put more simply, liquidity is how easily the asset can be sold. Cash has long been considered the most liquid asset, but NFTs are arguably much more liquid than cash. NFTs don’t require anything physical to trade hands. And even electronic currency transfers take time and are subject to government oversight. NFTs are so new, they’re barely regulated. But by using blockchain technology, they can be easily and safely bought and sold anonymously. NFTs are a money launderer’s dream. It’s unclear if NFTs are actually being used to launder money, but it’s a concern.

The other reason I think NFTs are so popular is speculation. Because NFTs are so liquid and because there basically doesn’t even need to be an underlying work, the initial cost to “mint” (create) an NFT is near zero. And by using blockchain systems, NFTs can be resold with little overhead. (Though they can also be configured to ensure a certain overhead, e.g. that 10% of every resale goes to the original artist.) These characteristics, along with the newness of NFTs, make them a popular marketplace for speculators, people who purchase assets with the intent of holding them for only a short time and then selling them for a profit.

NFTs started to enter the public consciousness in February 2021, after the 10-year-old “Nyan cat” animation sold for over half a million dollars. This was also just a few weeks after the Gamestop stock short squeeze made a compelling case that average investors, working in concert, could upset the stock market and make millions. So it’s no wonder that there is rampant speculation in NFTs.

In conclusion, NFTs will be a tremendous benefit to digital artists, who did not previously have a way to easily prove the authenticity of their works (which is of great importance to investors) or to provide a digital equivalent to numbered prints in the physical art world. But the hype about NFTs is just that. It’s driven by speculators and you’d be crazy to think of this as a worthy investment opportunity.

Content moderation online is a newsworthy and heated political topic. In the past year, social media companies and Internet infrastructure companies have gotten much more aggressive about banning and suspending users and organizations from their platforms. Today, Congress is holding another hearing for tech CEOs to explain and defend their content moderation standards. Relatedly, Ben Thompson at Stratechery recently had interesting interviews with Patrick Collison (Stripe), Brad Smith (Microsoft), Thomas Kurian (Google Cloud), and Matthew Prince (Cloudflare) about the difficult road ahead re: content moderation by Internet infrastructure companies.

I’m unconvinced of the need to rewrite Section 230, but like the rest of the Telecom Act (which turned 25 last month), the law is showing its age. There are legal questions about Internet content moderation that would benefit from clarifications from courts or legal scholars.

(One note: Social media common carriage, which some advocates on the left, right, and center have proposed, won’t work well, largely for the same reason ISP common carriage won’t work well—heterogeneous customer demands and a complex technical interface to regulate—a topic for another essay.)

The recent increase in content moderation and user bans raises questions, for lawmakers in both parties, about how these practices interact with existing federal laws and court precedents. Some legal issues that need industry, scholar, and court attention:

Public Officials’ Social Media and Designated Public Forums

Does Knight Institute v. Trump prevent social media companies’ censorship on public officials’ social media pages?

The 2nd Circuit, in Knight Institute v. Trump, deemed the “interactive space” beneath Pres. Trump’s tweets a “designated public forum,” which meant that “he may not selectively exclude those whose views he disagrees with.” For the 2nd Circuit and any courts that follow that decision, the “interactive spaces” of most public officials’ Facebook pages, Twitter feeds, and YouTube pages seem to be designated public forums.

I read the Knight Institute decision when it came out and I couldn’t shake the feeling that the decision had some unsettling implications. The reason the decision seems amiss struck me recently:

Can it be lawful for a private party (Twitter, Facebook, etc.) to censor members of the public who are using a designated public forum (like replying to President Trump’s tweets)? 

That can’t be right. We have designated public forums in the physical world, like when a city council rents out a church auditorium or Lions Club hall for a public meeting. All speech in a designated public forum is accorded the strong First Amendment rights found in traditional public forums. I’m unaware of a case on the subject but a court is unlikely to allow the private owner of a designated public forum, like a church, to censor or dictate who can speak when its facilities are used as a designated public forum.

The straightforward implication from Knight Institute v. Trump seems to be that neither politicians nor social media companies can make viewpoint-based decisions about who can comment on or access an official’s social media account.

Knight Institute creates more First Amendment problems than it solves, and could be reversed someday. [Ed. update: In April 2021, the Supreme Court vacated the 2nd Circuit decision as moot since Trump is no longer president. However, a federal district court in Florida concluded, in Attwood v. Clemons, that public officials’ “social media accounts are designated public forums.” The Knight Institute has likewise sued Texas Attorney General Paxton for blocking users and claimed that his social media feed is a designated public forum. It’s clear more courts will adopt this rule.] But to the extent Knight Institute v. Trump is good law, it seems to limit how social media companies moderate public officials’ pages and feeds.

Cloud neutrality

How should tech companies, lawmakers, and courts interpret Sec. 512?

Wired recently published a piece about “cloud neutrality,” which draws on net neutrality norms of nondiscrimination towards content and applies them to Internet infrastructure companies. I’m skeptical of the need or constitutionality of the idea but, arguably, the US has a soft version of cloud neutrality embedded in Section 512 of the DMCA.

The law extends the copyright liability safe harbor to Internet infrastructure companies only if:

the transmission, routing, provision of connections, or storage is carried out through an automatic technical process without selection of the material by the service provider.

17 USC § 512(a).

Perhaps a copyright lawyer can clarify, but it appears that Internet infrastructure companies may lose their copyright safe harbor if they handpick material to censor. To my knowledge, there is no scholarship or court decision on this question.

State Action

What evidence would a user-plaintiff need to show that their account or content was removed due to state action?

Most complaints of state action for social media companies’ content moderation are dubious. And while state action is hard to prove, in narrow circumstances it may apply. The Supreme Court has said that when there is a “sufficiently close nexus between the State and [a] challenged action,” the action of a private company will be treated as state action. For that reason, content removals made after non-public pressure or demands from federal and state officials to social media moderators likely aren’t protected by the First Amendment or Section 230.

Most examples of federal and state officials privately jawboning social media companies will never see the light of day. However, it probably occurs. Based on Politico reporting, for instance, it appears that state officials in a few states leaned on social media companies to remove anti-lockdown protest events last April. It’s hard to know exactly what occurred in those private conversations, and Politico has updated the story a few times, but examples like that may qualify as state action.

Any public official who engages in non-public jawboning resulting in content moderation could also face a Section 1983 claim: civil liability for deprivation of an affected user’s constitutional rights.

Finally, what should Congress do about foreign state action that results in tech censorship in the US? A major theme of the Stratechery interviews is that many tech companies feel pressure to set their moderation standards based on what foreign governments censor and prohibit. Content removal from online services because of foreign influence isn’t a First Amendment problem, but it is a serious free speech problem for Americans.

Many Republicans and Democrats want to punish large tech companies for real or perceived unfairness in content moderation. That’s politics, I suppose, but it’s a damaging instinct. For one thing, the Section 230 fixation distracts free-market and free-speech advocates from, among other things, alarming proposals for changes to the FEC that would empower it to criminalize more political speech. The singular focus on Section 230 repeal and reform also distracts from these other legal questions about content moderation. Hopefully the Biden DOJ or congressional hearings will take some of these up.

Here’s a new animated explainer video that I narrated for the Federalist Society’s Regulatory Transparency Project. The 3-minute video discusses how earlier “tech giants” rose and fell as technological innovation and new competition sent them off to what the New York Times once appropriately called “The Hall of Fallen Giants.” It’s a continuing testament to the power of “creative destruction” to upend and reorder markets, even as many pundits insist that such change is no longer possible.

This is an important lesson for us to remember today, as I noted in a recent editorial for The Hill about why “Open-ended antitrust is an innovation killer.”

[Last updated 3/25/22]

Industrial Policy is a red-hot topic once again with many policymakers and pundits of different ideological leanings lining up to support ambitious new state planning for various sectors — especially 5G, artificial intelligence, and semiconductors. A remarkably bipartisan array of people and organizations are advocating for government to flex its muscle and begin directing more spending and decision-making in various technological areas. They all suggest some sort of big plan is needed, and it is not uncommon for these industrial policy advocates to suggest that hundreds of billions will need to be spent in pursuit of those plans.

Others disagree, however, and I’ll be using this post to catalog some of their concerns on an ongoing basis. Some of the criticisms listed here are portions of longer essays, many of which highlight other types of steps that governments can take to spur innovative activities. Industrial policy is an amorphous term, with definitions spanning a broad spectrum of possible proposals. Almost everyone believes in some form of industrial policy if you define the term broadly enough. But, as I argued in a September 2020 essay, “On Defining ‘Industrial Policy,’” I believe it is important to narrow the focus of the term so that we can continue to use it in a rational way. Toward that end, I believe a proper understanding of industrial policy refers to targeted and directed efforts to plan for specific future industrial outputs and outcomes.

The collection of essays below is merely an attempt to highlight some of the general concerns about the most ambitious calls for expansive industrial policy, many of which harken back to debates I was covering in the late 1980s and early 1990s, when I first started a career in policy analysis. During that time, Japan and South Korea were the primary countries of concern cited by industrial policy advocates. Today, it is China’s growing economic standing that is fueling calls for ambitious state-led targeted investments in “strategic” sectors and technologies. To a lesser extent, grandiose European industrial policy proposals are also prompting new US counter-proposals.

All this activity is what has given rise to many of the critiques listed below. If you have suggestions for other essays I might add to this list, please feel free to pass them along. FYI: There’s no particular order here.


In our latest feature for Discourse magazine, Connor Haaland and I explore the question, “Should the U.S. Copy China’s Industrial Policy?” We begin by noting that:

Calls for revitalizing American industrial policy have multiplied in recent years, with many pundits and policymakers suggesting that the U.S. should consider taking on Europe and China by emulating their approaches to technological development. The goal would be to have Washington formulate a set of strategic innovation goals and mobilize government planning and spending around them.

We continue on to argue that what most of these advocates miss is that:

China’s targeting efforts are often antithetical to both innovation and liberty, and involve plenty of red tape and bureaucracy. China has become a remarkably innovative country for many reasons, including its greater tolerance for risk-taking, even as the Chinese Communist Party continues to pump resources into strategic sectors. But most Chinese innovation is permissible only insomuch as it furthers the party’s objectives, a strategy the U.S. obviously wouldn’t want to copy.

We discuss the problems associated with some of those Chinese efforts as well as proposed US responses, like the recently released 756-page report from the National Security Commission on Artificial Intelligence. The report takes an everything-and-the-kitchen-sink approach to state direction for new AI-related efforts and spending. While that report says the government now must “drive change through top-down leadership” in order to “win the AI competition that is intensifying strategic competition with China,” we argue that there could be some serious pitfalls with top-down, high price tag approaches.

Jump over to the Discourse site to read the full essay, as well as our previous essay, which asked, “Can European-Style Industrial Policies Create Tech Supremacy?” These two essays build on the research Connor and I have been doing on global artificial intelligence policies. In a much longer forthcoming white paper, we explore both the regulatory and industrial policy approaches for AI being adopted in the US, China, and the EU. Stay tuned for more.