People are smarter than they look (or, why I’m not worried about solutionism)

March 3, 2013

In the New York Times today, Evgeny Morozov indicts the “solutionism” of Silicon Valley, which he defines as the “intellectual pathology that recognizes problems as problems based on just one criterion: whether they are ‘solvable’ with [technology].” This is the theme of his new book, To Save Everything, Click Here, which I’m looking forward to reading.

Morozov is absolutely right that there is a tendency among the geekerati to want to solve things that aren’t really problems, but I think he overestimates the effects this has on society. What are the examples of “solutionism” that he cites? They include:

  • LivesOn, a yet-to-launch service that promises to tweet from your account after you have died
  • Superhuman, another yet-to-launch service with no public description
  • Seesaw, an app that lets you poll friends for advice before making decisions
  • A notional contact lens product that would “make homeless people disappear from view” as you walk about

It should first be noted that three of these four products don’t yet exist, so they’re straw men. But let’s grant Morozov’s point, that the geeks are really cooking these things up. Does he really think that no one besides him sees how dumb these ideas are?

If LivesOn gets off the ground, how many people does he suppose will sign up to tweet beyond the grave? Or take Seesaw, the one real product on the list. How many people are really going to use the app for anything more than having a bit of fun once in a while? Surely Morozov wouldn’t use such an app to make important decisions, nor would he use it to slavishly make every minor decision of life, so why would he expect that anyone else would?

It seems to me that a more balanced list of “solutionist” technologies might include applications like Google’s Ngram Viewer, a technology whose uses no one could clearly anticipate until it was built. The fact that a technology was spawned by a solutionist tendency doesn’t make it bad, and I’m confident society will reject the bad or stupid ones.

What seems to worry Morozov the most, though, are personal data mining technologies that expose our inconsistencies. He quotes Polish philosopher Leszek Kolakowski:

“The breed of the hesitant and the weak … of those … who believe in telling the truth but rather than tell a distinguished painter that his paintings are daubs will praise him politely,” he wrote, “this breed of the inconsistent is still one of the main hopes for the continued survival of the human race.” If the goal of being confronted with one’s own inconsistency is to make us more consistent, then there is little to celebrate here.

But there’s no reason to think that a technology that confronts one with one’s inconsistencies must also propel one inexorably toward unthinking consistency. To the contrary, such technology might be a way of surfacing choices to be seriously contemplated (something Morozov seems to like).

Take the example posed by Gordon Bell that Morozov quotes (and mocks): “Imagine being confronted with the actual amount of time you spend with your daughter rather than your rosy accounting of it.” At that moment you are presented with a choice. It’s not clear from the mere fact that there is an inconsistency that the obvious course of action is to spend more time with your daughter. It probably does mean that you will want to reflect on your actions and priorities, and you might well conclude that no change is required.

I think Emerson put it better than Kolakowski: “A foolish consistency is the hobgoblin of little minds.” The operative word is foolish. If we treat the mere discovery of an inconsistency as an answer rather than a question, then yes, that would be problematic. But I don’t see what it is about the mere existence of technologies like personal data mining that would rob people of their autonomy and their reasoning faculties.

Morozov is again right when he says:

Whenever technology companies complain that our broken world must be fixed, our initial impulse should be to ask: how do we know our world is broken in exactly the same way that Silicon Valley claims it is? What if the engineers are wrong and frustration, inconsistency, forgetting, perhaps even partisanship, are the very features that allow us to morph into the complex social actors that we are?

But we do indeed ask ourselves these questions all the time—not only in the pages of the NYT Sunday Review and Dissent, but more importantly in the marketplace. I predict that LivesOn won’t amount to much, that Seesaw will get acquired by some bigger company and then shut down, that Superhuman will be some kind of souped-up to-do list, and (going out on a limb) that we’ll never have homeless-hiding contact lenses. Just because Silicon Valley dreams up these things doesn’t mean that people will want them.

As for automatic life logging and personal data mining, I predict it is here to stay because people will like the benefits. Will it turn us into robots? Not any more than Amazon’s recommendation algorithms make us buy anything. Have we all bought more as a result of such recommendations? Sure, but as a matter of choice that few of us regret. I could be wrong, but I think people will decide that they can do without some kinds of frustration and forgetting, and that they would at least like to be aware of their inconsistencies, and that it’ll be O.K. because they’ll reject the stupid technologies and uses of technologies that diminish their humanity.
