I have been covering telecom and Internet policy for almost 30 years now. During much of that time — which included a nine-year stint at the Heritage Foundation — I have interacted with conservatives on various policy issues and often worked very closely with them to advance certain reforms.

If I divided my time in Tech Policy Land into two big chunks of time, I’d say the biggest tech-related policy issue for conservatives during the first 15 years I was in the business (roughly 1990 – 2005) was preventing the resurrection of the so-called Fairness Doctrine. And the biggest issue during the second 15-year period (roughly 2005 – present) was stopping the imposition of “Net neutrality” mandates on the Internet. In both cases, conservatives vociferously blasted the notion that unelected government bureaucrats should sit in judgment of what constituted “fairness” in media or “neutrality” online.

Many conservatives are suddenly changing their tune, however. President Trump and Sen. Ted Cruz, for example, have been increasingly critical of both traditional media and new tech companies in various public statements and have suggested an openness to increased regulation. The President has gone after old and new media outlets alike, while Sen. Cruz (along with others like Sen. Lindsey Graham) has suggested during congressional hearings that increased oversight of social media platforms is needed, including potential antitrust action.

Meanwhile, during his short time in office, Sen. Josh Hawley (R-Mo.) has become one of the most vocal Internet critics on the Right. In a shockingly worded USA Today editorial in late May, Hawley said, “social media wastes our time and resources” and is “a field of little productive value” that has only “given us an addiction economy.” He even referred to these sites as “parasites” and blamed them for a long list of social problems, leading him to suggest that “we’d be better off if Facebook disappeared” along with various other sites and services.

Hawley’s moral panic over social media has now bubbled over into a regulatory crusade that would unleash federal bureaucrats to dictate “fair” speech on the Internet. He has introduced an astonishing piece of legislation aimed at undoing the liability protections that Internet providers rely upon to provide open platforms for speech and commerce. If Hawley’s absurdly misnamed new “Ending Support for Internet Censorship Act” is implemented, it would essentially combine the core elements of the Fairness Doctrine and Net Neutrality to create a massive new regulatory regime for the Internet.

Cato Unbound is taking on the issue of tech expertise this month and the lead essay came from Kevin Kosar, who argues for the revival of the Office of Technology Assessment. As he explains,

[N]o one wants Congress enacting policies that make us worse off, or that delay or stifle technologies that improve our lives. And yet this kind of bad policy happens with lamentable frequency. Pluralistic politics inevitably features some self-serving interests that are more powerful and politically persuasive than others. This is why government often undertakes bailouts and other actions that are odious to the public writ large.  

He continues, “Congress’s ineptitude in [science and technology policy] has been richly displayed.” To help embed expertise in science and technology policy, Kosar argues for the revival of the Office of Technology Assessment, which was established in 1972 and defunded in 1995.

I have been on the OTA beat for a little while now, and so I offered some criticism of Kosar’s proposal, which you can find here. I’ll lay my cards on the table: I’ve been skeptical of reviving the OTA in the past and I remain so. Here is my key graf on that:

Elsewhere, I have argued that the OTA should be seen as a last resort; there are other ways of embedding expertise in Congress, like boosting staff and reforming hiring practices. The following essay makes a slightly different argument, namely, that the history of the OTA shows the razor wire on which a revived version of the agency will have to balance. In its early years, the OTA was dogged by accusations of partiality. Having established itself as a neutral party throughout the 1980s, the OTA was abolished because it failed to distinguish itself among competing agencies. There is an underlying political economy to expertise that makes the revival of the OTA difficult, undercutting it as an option for expanding tech expertise. In a modern political environment where scientific knowledge is politicized and budgets are tight, the OTA would likely face the hatchet once again.

Slate recently published an astonishing piece of revisionist history under the title, “Bring Back the Golden Age of Broadcast Regulation,” which suggested that the old media regulatory model of the past would be appropriate for modern digital media providers and platforms. In the essay, April Glaser suggests that policymakers should resurrect the Fairness Doctrine and a host of old Analog Era content controls to let regulatory bureaucrats address Digital Age content moderation concerns.

In a tweetstorm, I highlighted a few examples of why the so-called Golden Era wasn’t so golden in practice. I began by noting that the piece ignores the troubling history of FCC speech controls and unintended consequences of regulation. That regime gave us limited, bland choices–and a whole host of First Amendment violations. We moved away from that regulatory model for very good reasons.

For those glorifying the Fairness Doctrine, I encourage them to read the great Nat Hentoff’s excellent essay, “The History & Possible Revival of the Fairness Doctrine,” about the real-world experience of life under the FCC’s threatening eye. Hentoff notes:

Two weeks ago, Gov. Polis signed a bill that generally cuts off Colorado state funds from ISPs that commit “net neutrality violations” in the state. Oddly, I’ve seen no coverage from national outlets and barely a mention from local outlets. Perhaps journalists and readers have tired of what Larry Downes has dubbed the net neutrality farce, a debate about Internet regulation that has distracted the FCC and lawmakers for over a decade.

There’s not much new in the net neutrality debate, but Colorado did tread new ground: a House amendment to allow ISPs to filter adult content barely failed on a 32-32 tie vote. Net neutrality in the US runs into First Amendment and Section 230 problems, and that amendment is the first time I’ve seen the issue raised by a state legislature.

A few thoughts on the law: in March I was invited to testify before a Colorado House committee about net neutrality, broadband, and the policy implications of the then-pending bill. I commended the bill drafters for scrupulously attempting to narrow their bill to intrastate consumer protection issues. Nevertheless, it was my view that the Colorado law, as written, wouldn’t survive judicial review if litigated.

States can have agreements with vendors and contractors and can require them to abide by certain contractual terms. However, courts have held that states cannot, as Seth Cooper has pointed out, use their contractual relationships with firms to extract concessions that are “tantamount to regulation.” State agencies cannot attempt an end-around federal laws that prevent state regulation of Internet services generally, and net neutrality regulation in particular.

My testimony:

Good afternoon. My name is Brent Skorup and I am a senior research fellow at the Mercatus Center at George Mason University. I also serve on the Broadband Deployment Advisory Committee of the Federal Communications Commission (FCC).

It is commendable that state legislatures, governors, and cities around the country, including in Colorado, are prioritizing broadband deployment. The focus should remain on the pressing broadband issues of competition and deployment. The political battles in Washington, DC, about net neutrality, which I have observed over the past decade, have alarmingly spread to statehouses in recent months, and they will distract from far more important issues.

Lawmakers should enter the debate with their eyes wide open about the stakes and the unintended effects of internet regulation. By imposing network management rules on certain providers, SB 19-078 conflicts with federal policy, codified in the Telecommunications Act, that internet access should be “unfettered by Federal or State regulation.”

First, net neutrality laws and regulations do not accomplish what they purportedly accomplish. As the FCC revealed when it defended its net neutrality regulations in federal court in 2016, any no-blocking rule is mostly unenforceable. As a tech journalist put it, internet service providers (ISPs) can “exempt [themselves] from the net neutrality rules”—the rules are “essentially voluntary.” The same problem arises with state net neutrality laws.

Second, state internet regulations are unlikely to survive judicial review. Internet access is inherently interstate: simply streaming a YouTube video or sending an email often transmits data across state lines. State attempts to regulate treatment of internet access therefore likely violate federal law, which vests authority to regulate interstate communications with the FCC.

Third, the bill penalizes small, rural carriers. There’s a saying in politics: “If you’re not at the table, you’re on the menu.” It appears that Colorado’s rural broadband providers are “on the menu.” The bill applies internet regulations only to companies receiving state support (13 companies, each one serving rural areas). With the exception of CenturyLink, these are very small telecommunications companies, and the smallest had 64 customers. It is a puzzle why the state would add regulations and compliance costs to rural ISPs at a time when the FCC and most states are doing everything possible to help deploy broadband in rural areas.

This is not a plea to “do nothing” in Colorado regarding broadband. The FCC’s Broadband Deployment Advisory Committee has several recommendations for states and localities to improve broadband deployment.

Further, the FCC and some states are considering making it easier for private property owners to install wireless antennas without local regulation and fees, much like how satellite dishes are installed.

Finally, the legislature could also urge flexibility from the FCC regarding the federal high-cost fund, which disburses about $60 million annually to carriers in Colorado. My preliminary estimates using FCC data suggest that, under a new voucher program, every rural household in Colorado could receive $15 to $20 per month to reduce their monthly broadband bill.
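As a quick back-of-the-envelope check of that estimate (my own illustration, not part of the testimony; the implied household counts are derived, not official figures), the arithmetic works out as follows:

```python
# Back-of-the-envelope check of the voucher estimate above.
# The $60 million annual fund size comes from the testimony; the
# implied household counts are my own derivation, not official data.

annual_fund = 60_000_000          # federal high-cost support in Colorado, $/year
monthly_pool = annual_fund / 12   # $5 million available per month

for voucher in (15, 20):          # the $15-$20/month range cited above
    households = monthly_pool / voucher
    print(f"A ${voucher}/month voucher could reach ~{households:,.0f} households")
```

A $5 million monthly pool spread across $15 to $20 vouchers implies roughly 250,000 to 333,000 participating rural households, which is the scale such an estimate assumes.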

Testimony on the Mercatus website here.

[This essay originally appeared on the AIER blog on May 28, 2019. USA TODAY also ran a shorter version of this essay as a letter to the editor on June 2, 2019.]

In a hotly worded USA Today op-ed last week, Senator Josh Hawley (R-Missouri) railed against social media sites Facebook, Instagram, and Twitter. He argued that “social media wastes our time and resources” and is “a field of little productive value” that has only “given us an addiction economy.” Sen. Hawley refers to these sites as “parasites” and blames them for a litany of social problems (including an unproven link to increased suicide), leading him to declare that “we’d be better off if Facebook disappeared.”

As far as moral panics go, Sen. Hawley’s will go down as one for the ages. Politicians have always castigated new technologies, media platforms, and content for supposedly corrupting the youth of their generation. But Sen. Hawley’s inflammatory rhetoric and proposals are something we haven’t seen in quite some time.

He sounds like those fire-breathing politicians and pundits of the past century who vociferously protested everything from comic books to cable television, the waltz to the Walkman, and rock-and-roll to rap music. In order to save the youth of America, many past critics said, we must destroy the media or media platforms they are supposedly addicted to. That is exactly what Sen. Hawley would have us do to today’s leading media platforms because, in his opinion, they “do our country more harm than good.”

We have to hope that Sen. Hawley is no more successful than past critics and politicians who wanted to take these choices away from the public. Paternalistic politicians should not be dictating content choices for the rest of us or destroying technologies and platforms that millions of people benefit from.

[This essay originally appeared on the AIER blog on May 23, 2019 under the title, “Spring Cleaning for the Regulatory State.”]

_____________________________

Spring is in full blossom, and many of us are in the midst of our annual house-cleaning ritual. A regular deep clean makes good sense because it makes our living spaces more orderly and gets rid of the gunk and grime that has amassed over the past year.

Unfortunately, governments almost never engage in their own spring-cleaning exercise. Statutes and regulations continue to accumulate, layer by layer, until they suffocate not only economic opportunity, but also the effective administration of government itself. Luckily, some states have realized this and have taken steps to help address this problem.

Mountains of Regulations

First, here are some hard facts about regulatory accumulation:

  • Red tape grows: Since the first edition of his annual publication Ten Thousand Commandments in 1993, Wayne Crews has documented how federal agencies have issued 101,380 rules. Other reports find agency staffing levels jumped from 57,109 to 277,163 employees from 1960 to 2017, while agency budgets swelled in real terms from $3 billion in 1960 to $58 billion in 2017 (2009$).
  • Nothing ever gets cleaned up: A Deloitte survey of the U.S. Code reveals that 68 percent of federal regulations have never been updated and that 17 percent have only been updated once. If a company never updated its business model, it would fail eventually. But governments get away with doing the same thing without any fear of failure. “If it were a country, U.S. regulation would be the world’s eighth-largest economy, ranking behind India and ahead of Italy,” Crews notes.
  • The burden of regulatory accumulation is getting worse: “The estimate for regulatory compliance and economic effects of federal intervention is $1.9 trillion annually,” Crews finds, which is equal to 10 percent of the U.S. gross domestic product for 2017. When regulatory costs are added to federal spending, Crews finds, the combined burden equals $4.173 trillion, or 30 percent of the entire economy. Mercatus Center research has found that “economic growth in the United States has, on average, been slowed by 0.8 percent per year since 1980 owing to the cumulative effects of regulation.” This means that “the US economy would have been about 25 percent larger than it actually was as of 2012” if regulation had been held to roughly the same aggregate level it stood at in 1980.
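The compounding behind that last figure is easy to verify with a rough sketch (my own illustration; the underlying Mercatus study uses a more detailed model): a 0.8 percent annual drag from 1980 to 2012 implies a counterfactual economy on the order of 25 to 30 percent larger.

```python
# Rough check on the Mercatus growth-drag figure. This simple
# compounding approximation is my own illustration; the underlying
# study uses a more detailed model.

drag = 0.008                  # 0.8% per year growth slowdown
years = 2012 - 1980           # 32 years of accumulation
ratio = (1 + drag) ** years   # counterfactual GDP / actual GDP

print(f"Counterfactual economy ~{(ratio - 1) * 100:.0f}% larger")
```

Simple compounding gives about 29 percent, the same ballpark as the study’s “about 25 percent” estimate.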

In sum, the evidence shows that the red tape is growing without constraint, hindering entrepreneurship and innovation, deterring new investment, raising costs to consumers, limiting worker opportunities/wages, and undermining economic growth.

Regulations accumulate in this fashion because the administrative state is on autopilot. Legislatures pass broad statutes delegating ambiguous authority to agencies. Bureaucrats are then free to roll the regulatory snowball down the hill until it has become so big that its momentum cannot be stopped.

The Death of Common Sense

Policy makers enact new rules with the best of intentions, of course, but we should not assume that the untrammeled growth of the regulatory state produces positive results. There is no free lunch, after all. Every regulation is a restriction on opportunities for experimentation with new and potentially better ways of doing things. Sometimes such restrictions make sense because regulations can pass a reasonable cost-benefit test. It would be foolish to assume that all regulations on the books do.

Spring cleaning for the regulatory state, therefore, should be viewed as an exercise in “good governance.” The goal is not to get rid of all regulations. The goal is to make sure that rules are reasonable and cost-effective so that the public can actually understand the law and get the highest value out of their government institutions.

Philip K. Howard, founder and chair of the nonprofit coalition Common Good and the author of The Death of Common Sense, has written extensively about how regulatory accumulation has become a chronic problem. “Too much law,” he argues, “can have similar effects as too little law.” “People slow down, they become defensive, they don’t initiate projects because they are surrounded by legal risks and bureaucratic hurdles,” Howard notes. “They tiptoe through the day looking over their shoulders rather than driving forward on the power of their instincts. Instead of trial and error, they focus on avoiding error.”

In such an environment, risk-taking and entrepreneurialism are more challenging and economic dynamism suffers. But regulatory accumulation also hurts the quality of government institutions and policies, which become fundamentally incomprehensible or illogical. “Society can’t function when stuck in a heap of accumulated mandates of past generations,” Howard concludes. This is why an occasional regulatory house cleaning is essential to unleash economic opportunity and improve the functioning of our democratic institutions.

Regulatory House Cleaning Begins

Reforms to address this problem are finally happening. In a series of new essays, my colleague James Broughel has documented how several states — including Idaho, Ohio, Virginia, and New Jersey — are undertaking serious efforts to get regulatory accumulation under control. They are utilizing a variety of mechanisms, including “regulatory reduction pilot programs” and “red tape review commissions.” Recently, Idaho actually initiated a sunset of its entire regulatory code and will now try to figure out how to clean up its 8,200 pages of regulations containing 736 chapters of state rules.

Meanwhile, other states are undertaking serious reform in one of the worst forms of regulatory accumulation: occupational licenses. The Federal Trade Commission notes that roughly 30 percent of American jobs require a license today, up from less than 5 percent in the 1950s. Research by economist Morris Kleiner and others finds that “restrictions from occupational licensing can result in up to 2.85 million fewer jobs nationwide, with an annual cost to consumers of $203 billion.” And many of the rules do not even serve their intended purpose. A major 2015 Obama administration report on the costs of occupational licensing concluded that “most research does not find that licensing improves quality or public health and safety.”

Arizona, West Virginia, and Nebraska are among the leaders in reforming occupational-licensing regimes using a variety of approaches. In some cases, the reforms sunset licensing rules for specific professions altogether. Other proposals grant workers reciprocity to use a license they obtained in another state. Finally, some states have proposed letting most professions operate without any license at all, but then requiring them to make it clear to consumers that they are unlicensed.

The Need for a Fresh Look

Sunsets are not silver-bullet solutions, and the recent experience with sunsetting and “de-licensing” requirements at the state level has been mixed because many legislatures ignore or circumvent requirements. Nonetheless, sunsets can still help prompt much-needed discussions about which rules make sense and which ones no longer do.

Sunsets can be forward-looking, too. I have proposed that when policy makers craft new laws, especially for fast-paced tech sectors, they should incorporate what we might think of as “the Sunsetting Imperative.” It would demand that any existing or newly imposed technology regulation include a provision sunsetting the law or regulation within two years. Reforms like these are also sometimes referred to as “temporary legislation” or “fresh look” requirements. Policy makers can always reenact rules that are still relevant and needed.

By forcing a periodic spring cleaning, sunsets and fresh-look requirements can help stem the tide of regulatory accumulation and ensure that only those policies that serve a pressing need remain on the books. There is no good reason for governments not to clean up their messes on occasion, just like the rest of us have to.

Congress should let the Satellite Television Extension and Localism Act Reauthorization (STELAR) of 2014 expire at the end of this year. STELAR is the most recent reincarnation of the Satellite Home Viewer Act of 1988, a law that has long since outlived its purpose.

Owners of home satellite dishes in the 1980s—who were largely concentrated in rural areas—were receiving retransmission of popular television programs via satellite carriers in apparent violation of copyright law. When copyright owners objected, Congress established a compulsory, statutory license mandating that content providers allow secondary transmission via satellite to areas unserved by either a broadcaster or a cable operator, and requiring satellite carriers to compensate copyright holders at the rate of 3 cents per subscriber per month for the retransmission of a network TV station or 12 cents for a cable superstation.

The retransmission fees were purposely set low to help the emerging satellite carriers get established in the marketplace when innovation in satellite technology still had a long way to go. Today the carriers are thriving business enterprises, and there is no need for them to continue receiving subsidies. Broadcasters, on the other hand, face unprecedented competition for advertising revenue that historically covered the entire cost of content production.

Today a broadcaster receives 28 cents per subscriber per month when a satellite carrier retransmits their local television signal. But the fair market value of that signal is actually $2.50, according to one estimate.

There is no reason retransmission fees cannot be “determined in the marketplace through negotiations among carriers, broadcasters and copyright holders,” as the Reagan administration suggested in 1988.

Aside from perpetuating an unjustified subsidy, renewal of STELAR may deprive owners of home satellite dishes in the nation’s twelve smallest Designated Market Areas of programming from their own local broadcast TV stations.

Due to severe capacity constraints inherent in satellite technology in the 1980s, the statutory license originally allowed satellite carriers to retransmit a single, distant signal (e.g. from a New York or Los Angeles network affiliate) throughout their entire footprint. As the technology has improved, the statutory license has been expanded in recent years to include local-into-local retransmission. DISH Network, which already provides local-into-local retransmission throughout the nation (in all 210 DMAs), has demonstrated that a statutory license for distant signals is no longer necessary or warranted.

Although DirecTV does not yet offer nationwide local-into-local retransmission, this is a voluntary business decision that should not dictate the renewal of a statutory license based on 30-year-old technology.


An interesting divide has opened up in recent months among right-of-center groups about what the FCC should do with the “C Band.” A few weeks ago, the FCC requested public comment on how to proceed with the band.

The C Band is 500 MHz of spectrum that the FCC, like regulators around the globe, dedicated for satellite use years ago and gave to satellite companies to share among each other. Satellite operators typically use it to transmit cable programming to a regional cable network operations center, where it is bundled and relayed to cable subscribers. However, the C Band would work terrifically if repurposed for 5G and cellular services. As Joe Kane explained in a white paper, the FCC and telecom companies are exploring various ways of accomplishing that.

Free-market groups disagree. Should the FCC prioritize the quick deployment of new wireless services, or deficit reduction and limiting FCC-granted windfalls?

This is a complex question since we’re dealing with the allocation of public property. Both sides, in my view, have a defensible free-market position. There are other non-trivial C Band issues like interference protection and the FCC’s authority to act here, but I’ll address the ideological split on the right.

The case for secondary markets

The full 500 MHz of “clean” C Band in the US would be worth tens of billions to cellular companies. However, the current satellite users don’t want to part with all of it; a group of satellite companies using the spectrum estimates it could sell 200 MHz to cellular carriers if the FCC would liberalize its rules to allow flexible uses (like 5G), not merely satellite services. The satellite providers would then be able to sell much of their spectrum on the secondary market (probably to cellular providers) at a nice premium.

Prof. Dan Lyons and Roslyn Layton wrote in support of the secondary market plan on the AEI blog and at Forbes, respectively. Joe Kane also favors the approach. As they say, the benefit of secondary market sales is that they will likely lead to a significant and fast repurposing of the C Band for mobile use. The consumer benefits of dezoned spectrum are large, and with every year of inaction, billions of dollars of consumer welfare evaporate. Hazlett and Munoz estimate that spectrum reallocated from a restricted use to flexible use generates annual consumer benefits of the same order of magnitude as the auction value of the spectrum.

I’d add that there’s a history of the FCC de-zoning spectrum (SMR spectrum in 2004, EBS spectrum in 2004, AWS-4 in 2011, WCS spectrum in 2012). The FCC is considering doing this with some government spectrum that Ligado or others could repurpose for mobile broadband. In these cases, the FCC upzoned spectrum so that it could be used for higher-valued uses, not the legacy uses required by previous FCCs. The circumstances and technologies vary, but some of these bands were repurposed quickly for better uses by cellular providers and are used for 4G LTE today by tens of millions of Americans.

The case for FCC auction

Liberalizing spectrum quickly gets it to higher-valued uses, but it does raise the complaint that the existing users are gaining an unfair windfall. I’m not sure when the C Band was allocated for satellite, but many legacy assignments of spectrum were given to industries for free.

When the FCC “upzones” spectrum, it typically increases the value of the band. The “secondary market” plan is akin to the government giving away a parcel of public land to a developer to be used for a gas station, then deciding years later to upzone the land so that condo or office buildings can be built on it. It’s a better use for the land, but the gas station operator gains a big windfall when the property value increases. Not only is there a windfall, the government captures no revenue from the increase in the value of public property.

Free-market groups like Americans for Tax Reform, Taxpayers Protection Alliance, and Citizens Against Government Waste favor the FCC reclaiming the spectrum from satellite providers, perhaps via incentive auction, and collecting government revenue by re-selling it. If the FCC went the incentive auction route, it would purchase the “satellite spectrum” (i.e., at a low price) from the current C Band users, upzone it, and re-sell that spectrum as “mobile spectrum” (i.e., at a high price) in an open auction. The FCC and the Treasury pocket the difference, probably several billion dollars here.

The FCC has only done one incentive auction, the 600 MHz auction. There, the FCC purchased “TV spectrum” from broadcasters and re-sold it to wireless carriers.

The benefit of this is deficit reduction and there’s more perceived fairness since there’s no big, FCC-granted windfall to legacy users. The downside is that it’s a slower, more complicated process since the FCC is deeply involved in the spectrum transfer. Arguably, however, the FCC should be deeply involved and interested in government revenue since spectrum is public property.

My view

A few years ago I would have definitely favored speed and the secondary market plan. I still lean toward that approach, but I’m a little more on the fence after reading work by Richard Epstein and others about the “public trust doctrine.” This is a traditional governance principle that requires public actors to receive fair value when disposing of public property. It prevents public institutions from giving discounted public property to friends and cronies. Clearly, cronyism isn’t at work here, and today’s FCC can’t undo what FCCs did generations ago in giving away spectrum. I think the need for speedy deployment trumps the windfall issue here, but it’s a closer call for me than in the past.

One proposal that hasn’t been contemplated with the C Band but might have merit is an overlay auction with a deadline. With such an auction, the FCC gives incumbent users a deadline to vacate a band (say, 5 years). The FCC then auctions flexible-use licenses in the band. The FCC receives the auction revenues and the winning bidders are allowed to deploy services in the “white spaces” unoccupied by the incumbents. The winning bidders are allowed to pay the incumbents to move out before the deadline.

With an overlay auction, you get fairly rapid deployment–at least in the white spaces–and the government gains revenue from the auction. This type of auction was used to deploy cellular (PCS) in the 1990s and cellular (AWS-1) in the 2000s. However, incumbents dislike it because the deadline devalues their existing spectrum holdings.

I think overlay auctions should be considered in more spectrum proceedings because they avoid the serious windfall problems while also allowing rapid deployment of new services. That doesn’t seem to be in the cards, however, and secondary markets seem like the next best option.

– Coauthored with Mercatus MA Fellow Walter Stover

The advent of artificial intelligence technology in dynamic pricing has given rise to fears of ‘digital market manipulation.’ Proponents of this claim argue that companies leverage artificial intelligence (AI) technology to obtain greater information about people’s biases and then exploit them for profit through personalized pricing. Those who advance these arguments often support regulation to protect consumers against information asymmetries and subsequent coercive market practices; however, such fears ignore the importance of the institutional context. These market manipulation tactics will not have a great effect precisely because they lack the coercive power to force people to open their wallets. Such coercive power is a function of social and political institutions, not of the knowledge of people’s biases and preferences that could be gathered from algorithms.

As long as companies such as Amazon operate in a competitive market setting, they are constrained in their ability to coerce customers who can vote with their feet, regardless of how much knowledge they actually gather about those customers’ preferences through AI technology.

I (Eye), Robot?

May 8, 2019

[Originally published on the Mercatus Bridge blog on May 7, 2019.]

I became a little bit more of a cyborg this month with the addition of two new eyes—eye lenses, actually. Before I had even turned 50, the old lenses that Mother Nature gave me were already failing due to cataracts. But after having two operations this past month and getting artificial lenses installed, I am seeing clearly again thanks to the continuing miracles of modern medical technology.

Cataracts can be extraordinarily debilitating. One day you can see the world clearly, the next you wake up struggling to see through a cloudy ocular soup. It is like looking through a piece of cellophane wrap or a continuously unfocused camera.

If you depend on your eyes to make a living, as most of us do, then cataracts make it a daily struggle to get even basic things done. I spend most of my time reading and writing each workday. Once the cataracts hit, I had to purchase a half-dozen pairs of strong reading glasses and spread them out all over the place: in my office, house, car, gym bag, and so on. Without them, I was helpless.

Reading is especially difficult in dimly lit environments, and even with strong glasses you can forget about reading the fine print on anything. Every pillbox becomes a frightening adventure. I invested in a powerful magnifying glass to make sure I didn’t end up ingesting the wrong things.

For those afflicted with particularly bad cataracts, it becomes extraordinarily risky to drive or operate machinery. More mundane things—watching TV, tossing a ball with your kid, reading a menu at many restaurants, looking at art in a gallery—also become frustrating.