The Incoherent Singularity

July 2, 2008

I don’t know how I missed it, but Reason’s Ron Bailey had a great interview with libertarian entrepreneur Peter Thiel back in May. There’s a lot of discussion of the singularity, a concept I’m finding less coherent the more I think about it. The basic idea is that at some point computers will get powerful enough that we’ll be able to build machines that are smarter than the smartest human, and at that point history becomes unpredictable, because the smarter-than-human robots will start doing things that we can’t understand with our puny human brains.

It seems to me that this story has three really serious problems. First, it’s a mistake to view intelligence as a linear scale that stretches upward to infinity. The invention of the IQ scale probably contributes to the illusion here: if people can have IQs of 100, 120, or 150, why can’t we build robots with IQs of 300, or 10,000? But the IQ scale measures people’s ability to perform tasks that we’ve found are correlated with the attribute we generally describe as intelligence in human beings, and it’s likely to be highly contingent on our culture’s specific endowments. If you built a robot to score as high as possible on an IQ test, you’d have a really good simulation of a human being, not some kind of all-powerful super-being.

The second problem with this story is that I think it’s wrong to think of “the” singularity as a unique event in history. In reality, there are singularities all around us. My fiancée, for example, passed into a singularity of sorts when she entered medical school. She now inhabits a world that I know very little about—I don’t understand most of what she talks about when she chats with other med students, and for the most part I don’t even get to see what she does on a daily basis. Similarly, I’m in a technological singularity from the perspective of my grandfather, who is barely able to send and receive email and has yet to figure out how to surf the web. Virtually every aspect of my profession is a mystery to him.

Yet I’m still perfectly able to relate to both of them. We continue to have many things in common, and we find mutual benefit in conversation, socializing, and so forth. We may be separated from one another by “singularities” in some aspects of our lives, but we’re also drawn together by bonds of common blood and mutual interests.

The same sort of relationship would exist between human beings and super-intelligent robots. As computers get smarter, we’ll gradually delegate more and more tasks to them. Just as we currently rely on computers to switch our phone calls and balance our checkbooks, someday we’ll rely on them to prove our physics theorems and manage our supply chains. But we’ll continue to be good at other things. In particular, much of “intelligence” flows from a diversity of experience and local knowledge.

No matter how smart computers get, they won’t have the benefit of the kinds of experiences that flow from being a human being. Human intelligence isn’t something you can simulate, because it’s ultimately the product of decades of living as a human being and having the kinds of experiences humans have. (You could simulate a human being’s life, but the things you’d learn from that would be limited by the parameters of the simulation.) Hence, even if computers become superior in all manner of formal intellectual tasks, they’ll still find humans useful for our ability to bring a unique perspective to problems.

Finally, and most seriously, it seems to me that we’re also surrounded by singularities when it comes to computers. Google, for example, performs computations that no human being could hope to reproduce or even understand in any detail. Over time, as we increasingly use adaptive programming techniques like genetic programming and neural networks, we’ll have more and more pieces of software that perform useful tasks in ways that people don’t really understand. Today, I don’t understand how Google decides which results to return for any given search term. In 2030, I won’t understand how a computer knew about an impending earthquake a week before human analysts predicted it. By 2050, the human race may collectively have no understanding of how Skynet manages the world’s supply chain with hardly any human input. I see these developments as continuous with what’s already happening (and has been happening for the last couple of decades).
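To make the point concrete, here is a minimal sketch of the sort of adaptive technique I have in mind: a toy genetic algorithm, a simpler cousin of the genetic programming mentioned above, written in Python purely for illustration. The target string, population size, and mutation rate are all invented for the example.

```python
import random

# A toy genetic algorithm: evolve a bit string to match a hidden target.
# The algorithm never inspects the target directly; it only receives a
# fitness score, and the answer emerges without anyone designing it.
TARGET = [random.randint(0, 1) for _ in range(32)]

def fitness(candidate):
    # Count matching bits; this score is all the algorithm ever observes.
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=0.02):
    # Flip each bit with small probability.
    return [1 - bit if random.random() < rate else bit for bit in candidate]

def crossover(a, b):
    # Splice two parents at a random point.
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

population = [[random.randint(0, 1) for _ in range(32)] for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    parents = population[:10]  # keep the fittest, breed the rest
    population = parents + [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(40)
    ]

best = max(population, key=fitness)
print(f"after {generation + 1} generations, best matches "
      f"{fitness(best)}/{len(TARGET)} bits")
```

At this scale nothing is mysterious, but scale up the genome, the population, and the fitness function a few orders of magnitude, and the question of why the winning individual looks the way it does becomes exactly as opaque as the Google example above.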

All of which leaves me thinking that “the singularity” is an incoherent, possibly silly, concept. For at least a century, people have lived with a kind of historical event horizon that limited their ability to predict the future. Someone in 1975 could not have predicted what would happen in 2000. Someone in 1930 could not have predicted the 1960s. Someone in 1850 could not have predicted the early 20th century. The accelerating pace of technology may bring that event horizon a little bit closer—I don’t think we can say much about what the 2030s will be like, for example—but this is not a new development; it’s the way things have been since the start of the industrial revolution.

  • Ben

    It seems to me that your assumptions about this event are slightly off. (Talking about the singularity… I didn’t read the interview yet.)

    1. My interpretation is that these machines would have a way to simulate/emulate the human experience… thus exceeding our intelligence, not our parallel computations.

    2. Of course we can’t predict 2040… but we can do all right at 2009… most ideas and/or stories I’ve read seem to bring predictability down to weeks or days.

    3. Not understanding the way Google does something and not being able to understand it are very different things.

    As for me… I think by the time it happens we’ll have wetware and we’ll be the machines.

  • Timon

    I like the phrase “The rapture of the nerds.” The bumper sticker could be something like “In case of singularity, the driver of this car will be in his laptop.”

  • http://enigmafoundry.wordpress.com/2007/08/18/its-only-censorship-so-whats-the-problem/ enigma_foundry

    Tim:

    I believe Bill Joy had a certain understanding of the dangers that attend this belief in a ‘singularity’; see his “Why the Future Doesn’t Need Us” in Wired Magazine:

    http://www.wired.com/wired/archive/8.04/joy.html

    There is one way that this will become dangerous, though (and I think we are still much further away from the singularity than most people realize; thinking is very complex and variable, to wit: humans looking at 50 possible chess moves were still beating computers that saw many billions): no humans will be competent to judge the decisions made by machines, and the machines themselves will begin to design new processes and algorithms, unimagined and utterly incomprehensible to anyone with an interest in the outcomes. We will have created the possibility (Paul Virilio would say inevitability) of a failure that is incomprehensible, and whose solutions would be likewise incomprehensible.

    But it still is Science Fiction, not Science Fact…

  • http://joshmaurice.livejournal.com Josh Maurice

    Things do seem to me to be complexifying so fast that we’re headed for, as Bucky Fuller titled one of his books, “Utopia or Oblivion” within a few minutes, months, or years.

    Please see http://joshmaurice.livejournal.com where I just posted some ideas for a coding project, perhaps comprising only 6 or 7 lines of code, that could produce much more fluidly flowing graphical Internet interfaces, enabling much more rapid, continuous feedback among millions of Internetizens.
