Singularity Summit 09

October 3, 2009

The Singularity Summit is going on this weekend in New York. Check out the program here. I can’t wait to see all the videos! Also check out the suggested reading list—in particular, Why Work Toward the Singularity. Here’s a teaser:

Why is the Singularity worth doing? The Singularity Institute for Artificial Intelligence can’t possibly speak for everyone who cares about the Singularity. We can’t even presume to speak for the volunteers and donors of the Singularity Institute. But it seems like a good guess that many supporters of the Singularity have in common a sense of being present at a critical moment in history; of having the chance to win a victory for humanity by making the right choices for the right reasons. Like a spectator at the dawn of human intelligence, trying to answer directly why superintelligence matters chokes on a dozen different simultaneous replies; what matters is the entire future growing out of that beginning.

But it is still possible to be more specific about what kinds of problems we might expect to be solved. Some of the specific answers seem almost disrespectful to the potential bound up in superintelligence; human intelligence is more than an effective way for apes to obtain bananas. Nonetheless, modern-day agriculture is very effective at producing bananas, and if you had advanced nanotechnology at your disposal, energy and matter might be plentiful enough that you could produce a million tons of bananas on a whim. In a sense that’s what nanotechnology is – good-old-fashioned material technology pushed to the limit. This only begs the question of “So what?”, but the Singularity advances on this question as well; if people can become smarter, this moves humanity forward in ways that transcend the faster and easier production of more and more bananas. For one thing, we may become smart enough to answer the question “So what?”

In one sense, asking what specific problems will be solved is like asking Benjamin Franklin in the 1700s to predict electronic circuitry, computers, Artificial Intelligence, and the Singularity on the basis of his experimentation with electricity. Setting an upper bound on the impact of superintelligence is impossible; any given upper bound could turn out to have a simple workaround that we are too young as a civilization, or insufficiently intelligent as a species, to see in advance. We can try to describe lower bounds; if we can see how to solve a problem using more or faster technological intelligence of the kind humans use, then at least that problem is probably solvable for genuinely smarter-than-human intelligence. The problem may not be solved using the particular method we were thinking of, or the problem may be solved as a special case of a more general challenge; but we can still point to the problem and say: “This is part of what’s at stake in the Singularity.”

The rest of the essay is worth reading.
