The Incoherent Singularity

July 2, 2008

I don’t know how I missed it, but Reason’s Ron Bailey had a great interview with libertarian entrepreneur Peter Thiel back in May. There’s a lot of discussion of the singularity, a concept I’m finding less coherent the more I think about it. The basic concept is that at some point computers will get powerful enough that we’ll be able to build machines that are smarter than the smartest human, and at that point history becomes unpredictable because the smarter-than-human robots will start doing things that we can’t understand with our puny human brains.

It seems to me that this story has three really serious problems. First, it’s a mistake to view intelligence as a linear scale that stretches upwards to infinity. The invention of the IQ scale probably contributes to the illusion here. If people can have IQs of 100, 120, or 150, why can’t we build robots with IQs of 300? Or 10,000? But the IQ scale measures people’s ability to perform tasks that we’ve found are correlated with the attribute we generally describe as intelligence in human beings. It’s likely to be highly contingent on our culture’s specific endowments. If you built a robot to score as high as possible on an IQ test, it would be a really good simulation of a human being. It wouldn’t be some kind of all-powerful super-being.

The second problem with this story is that I think it’s wrong to think of “the” singularity as a unique event in history. In reality, there are singularities all around us. My fiancée, for example, entered a singularity of sorts when she started medical school. She now inhabits a world that I know very little about—I don’t understand most of what she talks about when she chats with other med students, and for the most part I don’t even get to see what she does on a daily basis. Similarly, I’m in a technological singularity from the perspective of my grandfather, who is barely able to send and receive email and has yet to figure out how to surf the web. Virtually every aspect of my profession is a mystery to him.

Yet I’m still perfectly able to relate to both of them. We continue to have many things in common, and we find mutual benefit in conversation, socializing, and so forth. We may be separated from one another by “singularities” in some aspects of our lives, but we’re also drawn together by bonds of common blood and mutual interests.

The same sort of relationship would exist between human beings and super-intelligent robots. As computers get smarter, we’ll gradually delegate more and more tasks to them. Just as we currently rely on computers to switch our phone calls and balance our checkbooks, someday we’ll rely on them to prove our physics theorems and manage our supply chains. But we’ll continue to be good at other things. In particular, much of “intelligence” flows from a diversity of experience and local knowledge.

No matter how smart computers get, they won’t have the benefit of the kinds of experiences that flow from being human. Human intelligence isn’t something you can simulate because it’s ultimately the product of decades of living a human life and having the kinds of experiences human beings have. (You could simulate a human being’s life, but the things you’d learn from that would be limited by the parameters of the simulation.) Hence, even if computers become superior at all manner of formal intellectual tasks, they’ll still find humans useful for our ability to bring a unique perspective to problems.

Finally, and most seriously, it seems to me that we’re also surrounded by singularities when it comes to computers. Google, for example, performs computations that no human being could hope to reproduce or even understand in any detail. Over time, as we increasingly use adaptive programming techniques like genetic programming and neural networks, we’ll have more and more pieces of software that perform useful tasks in ways that people don’t really understand. Today, I don’t understand how Google decides which results to return for any given search term. In 2030, I won’t understand how a computer knew about an impending earthquake a week before human analysts predicted it. By 2050, the human race may collectively have no understanding of how Skynet manages the world’s supply chain with hardly any human input. I see these developments as continuous with what’s already happening (and has been happening for the last couple of decades).
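To give a flavor of what I mean by genetic programming, here’s a toy sketch in Python (my own illustration, not anything from the Thiel interview): it evolves arithmetic expressions toward a hidden target function, x² + x + 1, by repeatedly keeping the expressions that fit best and mutating random subtrees. The target function, population size, and mutation rates are all arbitrary choices for the example.

    import random

    # Primitive operations the evolved expressions can use.
    OPS = {'+': lambda a, b: a + b,
           '-': lambda a, b: a - b,
           '*': lambda a, b: a * b}

    def random_expr(depth=3):
        # Leaves are either the variable x or a small integer constant.
        if depth <= 0 or random.random() < 0.3:
            return 'x' if random.random() < 0.5 else random.randint(-5, 5)
        op = random.choice(list(OPS))
        return (op, random_expr(depth - 1), random_expr(depth - 1))

    def evaluate(expr, x):
        if expr == 'x':
            return x
        if isinstance(expr, int):
            return expr
        op, left, right = expr
        return OPS[op](evaluate(left, x), evaluate(right, x))

    def fitness(expr, samples):
        # Sum of squared errors against the hidden target x**2 + x + 1.
        return sum((evaluate(expr, x) - (x * x + x + 1)) ** 2 for x in samples)

    def mutate(expr):
        # Replace a randomly chosen subtree with a freshly generated one.
        if not isinstance(expr, tuple) or random.random() < 0.3:
            return random_expr()
        op, left, right = expr
        if random.random() < 0.5:
            return (op, mutate(left), right)
        return (op, left, mutate(right))

    samples = range(-10, 11)
    population = [random_expr() for _ in range(200)]
    for generation in range(50):
        population.sort(key=lambda e: fitness(e, samples))
        survivors = population[:50]  # keep the fittest quarter
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(150)]

    best = min(population, key=lambda e: fitness(e, samples))
    print(best, fitness(best, samples))

Run it a few times and the winning expression is usually a correct but inscrutable tangle of nested additions and multiplications that nobody designed and nobody needs to understand for it to work. That, scaled up a few billion-fold, is the sort of opaque-but-useful software I have in mind.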

All of which leaves me thinking that “the singularity” is an incoherent, possibly silly, concept. For at least a century, people have lived with a kind of historical event horizon that limited their ability to tell the future. Someone in 1975 could not have predicted what would happen in 2000. Someone in 1930 could not have predicted the 1960s. Someone in 1850 could not have predicted the early 20th century. The accelerating pace of technology may bring that event horizon a little bit closer—I don’t think we can say much about what the 2030s will be like, for example—but this is not a new development; it’s the way things have been since the start of the Industrial Revolution.
