Smart Device Paranoia

October 5, 2015

The idea that the world needs further dumbing down was really the last thing on my mind. Yet this is exactly what Jay Stanley argues for in a recent post on Free Future, the ACLU tech blog.

Specifically, Stanley is concerned by the proliferation of “smart devices,” from smart homes to smart watches, and the enigmatic algorithms that power them. Exhibit A: the Volkswagen “smart control devices” designed to deliberately cheat diesel emissions tests. Far from treating it as an isolated case, Stanley extrapolates the Volkswagen scandal into a parable about the dangers of smart devices more generally, and calls for recognition of “the virtue of dumbness”:

When we flip a coin, its dumbness is crucial. It doesn’t know that the visiting team is the massive underdog, that the captain’s sister just died of cancer, and that the coach is at risk of losing his job. It’s the coin’s very dumbness that makes everyone turn to it as a decider. … But imagine the referee has replaced it with a computer programmed to perform a virtual coin flip. There’s a reason we recoil at that idea. If we were ever to trust a computer with such a task, it would only be after a thorough examination of the computer’s code, mainly to find out whether the computer’s decision is based on “knowledge” of some kind, or whether it is blind as it should be.

While recoiling is a bit melodramatic, it’s clear from this that “dumbness” is not even the key issue at stake. What Stanley is really concerned about is bias or partiality (what he dubs “neutrality anxiety”), and “dumb” devices like coins escape neither bias nor opacity. A physical coin can be biased, a programmed coin can be fair, and at first glance the fairness of a physical coin is no more obvious than that of a virtual one.

Yet this is the argument Stanley uses to justify his proposed requirement that all smart device code be open to public scrutiny going forward. Based on a knee-jerk commitment to transparency, he gives zero weight to the social benefit of allowing software creators a degree of trade secrecy, especially as a potential substitute for patent and copyright protections. This is all the more ironic given that Volkswagen used existing copyright law to hide its own malfeasance.

More importantly, the idea that the only way to check a virtual coin is to look at the source code is a serious non sequitur. After all, in-use testing was how Volkswagen was actually caught in the end. What matters, in other words, is how the coin behaves in large and varied samples. In either the virtual or the physical case, the best and least intrusive way to check a coin is simply to do thousands of flips. But what takes hours with a dumb coin takes a fraction of a second with a virtual one. So I know which I prefer.
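To make that concrete, here’s a minimal sketch in Python (the names, like flip_test, are purely illustrative, not anyone’s actual test suite) of what “just do thousands of flips” looks like: a behavioral fairness check that never needs to see the coin’s source code.

```python
import random
from math import sqrt

def flip_test(coin, n=100_000):
    """Flip a (possibly biased) virtual coin n times and report how far
    the observed heads rate strays from fair, in standard errors."""
    heads = sum(coin() for _ in range(n))
    p_hat = heads / n
    # Under a fair coin the heads rate has standard error sqrt(0.25 / n),
    # so |z| much larger than ~3 is strong evidence of bias.
    z = (p_hat - 0.5) / sqrt(0.25 / n)
    return p_hat, z

fair_coin = lambda: random.random() < 0.5     # unbiased virtual coin
loaded_coin = lambda: random.random() < 0.52  # subtly loaded virtual coin

for name, coin in (("fair", fair_coin), ("loaded", loaded_coin)):
    p_hat, z = flip_test(coin)
    print(f"{name}: heads rate {p_hat:.3f} ({z:+.1f} standard errors from fair)")
```

Even a 52/48 load, invisible to the eye, shows up unmistakably within a fraction of a second. This is the virtual analogue of the in-use testing that caught Volkswagen: judge the device by its behavior, not its blueprints.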

Hours versus a fraction of a second may seem like a trivial advantage, but as an object or problem becomes more complex, the opacity and limitations of “dumb” things only grow. Tom Brady’s “dumb” football is a case in point. After Deflategate, I have much more confidence in the unbiasedness of the virtual ball in Madden. And to eliminate any doubt, I can once again run simulations, a standard practice among video game designers. This is what allows balance to be achieved in complex, asymmetrical video game maps, for example, while American football is stuck with a rectangle and switching ends at half-time.

In other words, despite Stanley’s repeated assertion that smart devices inevitably sacrifice equity for ruthless efficiency (like a hypothetical traffic light that turns green when it detects surgeons and corporate VPs), embedded algorithms are a demonstrably useful tool for achieving equity in the face of real-world complexity. Think, for instance, of the algorithms that draw congressional districts to eliminate gerrymandering.
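No one-liner can draw a district map, but one standard ingredient such redistricting algorithms score is geometric compactness. The Polsby-Popper ratio, 4πA/P², equals 1 for a circle and collapses toward 0 for the sprawling shapes gerrymanders tend to produce. A minimal sketch, assuming districts are given as simple polygons:

```python
from math import pi

def polsby_popper(vertices):
    """Polsby-Popper compactness of a simple polygon: 4*pi*area / perimeter^2.
    Equals 1.0 for a circle; sprawling, gerrymandered shapes score near 0."""
    area, perimeter = 0.0, 0.0
    for (x1, y1), (x2, y2) in zip(vertices, vertices[1:] + vertices[:1]):
        area += x1 * y2 - x2 * y1  # shoelace formula (doubled signed area)
        perimeter += ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
    return 4 * pi * abs(area / 2) / perimeter ** 2

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
sprawl = [(0, 0), (10, 0), (10, 0.1), (0, 0.1)]  # long, thin district
print(polsby_popper(square))  # ~0.785
print(polsby_popper(sprawl))  # ~0.031
```

An algorithm that maximizes scores like this (subject to equal population and contiguity) knows nothing about voters’ party registration, which is precisely the kind of blindness Stanley wants from his coin.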

Yet even if smart devices and algorithms can improve both efficiency and equity, they still require a dose of human intention, and therein lies the danger. Or does it?

Imagine a person, running late for something crucial, sitting at a seemingly interminable red light getting tense and angry. Today he may rail at his bad luck and at the universe, but in the future he will feel he’s the victim of a mind—and of whatever political entities are responsible for the shape of that signal’s logic.

In this future world of omnipresent agency, Stanley essentially imagines a pandemic of paranoid schizophrenia, where conspiracies lurk around every corner and strings of bad luck are interpreted as punishment by puppet masters. But this gets things exactly backwards. Smart devices are useful precisely because they remove agency, both in terms of our personal cognitive effort (as when the lights turn on as you enter a room) and in terms of discretionary influence over our lives.

In this respect, one of Stanley’s own examples directly contradicts his thesis. He points to

an award-winning image of a Gaza City funeral procession, which was challenged due to manual adjustments the photographer made to its tone. I suspect that if the adjustments had been made automatically by his camera (being today little more than a specialized computer), the photo would not have been questioned.

Exactly! The smart focus and light balance of a modern point-and-shoot camera not only make us all better photographers, but also remove the worry of unfair and manipulative human input. After all, before the ordinary traffic light there was the traffic guard, who let drivers through at his or her discretion. The move to automated lights condensed that human agency to the point of initial creation, dramatically reducing the potential for abuse. If smart devices mean we can automatically detect an ambulance or adjust a camera’s aperture, that is precisely the same sort of improvement.

The fact is that a benign rationality already pervades the world around us, embedded not just in our technology but also in our laws and institutions. Externalizing intelligence into rules and structures is the stuff of civilization, part of what’s called “extended cognition.” In the words of philosopher Andy Clark:

Advanced cognition depends crucially on our ability to dissipate reasoning: to diffuse achieved knowledge and practical wisdom through complex social structures, and to reduce the loads on individual brains by locating those brains in complex webs of linguistic, social, political and institutional constraints.

And yet we go through life without constantly looking over our shoulders. This is because we have adapted to the point where we are happily ignorant of the intelligence surrounding us. The hiddenness is a feature, not a bug, as it allows our attention to move on to more pressing things.

Critics of new technology always fail to appreciate this human adaptability, implicitly answering 21st-century thought experiments with 20th-century prejudices. The enduring lesson of extended cognition is that smart devices promise to make not just our stuff but us, as living creatures, more intelligent in a very real way, expanding our own capabilities rather than subordinating us to the whims of invisible others.

To that end, I can’t help but be reminded of the tagline at TechLiberation.com: “The problem is not whether machines think, but whether men do.”
