Few modern intellectuals gave more serious thought to forecasting the future than Herman Kahn. He wrote several books and essays imagining what the future might look like. But he was also a profoundly humble man who understood the limits of such forecasting. On that point, I am reminded of my favorite Herman Kahn quote:
History is likely to write scenarios that most observers would find implausible not only prospectively but sometimes, even in retrospect. Many sequences of events seem plausible now only because they have actually occurred; a man who knew no history might not believe any. Future events may not be drawn from the restricted list of those we have learned are possible; we should expect to go on being surprised.[1]
I have always loved that phrase, “a man who knew no history might not believe any.” Indeed, sometimes the truth (how history actually unfolds) really is stranger than fiction (or the hypothetical forecasts that came before it).
This insight has profound ramifications for public policy and for efforts to “plan progress,” which typically end badly. No two scholars nailed that point better in their work than Karl Popper and F.A. Hayek, two of the preeminent philosophers of science and politics of the 20th century. Popper cogently argued that the problem with “the Utopian programme” is that “we do not possess the experimental knowledge needed for such an undertaking.”[2] “Progress by its very nature cannot be planned,” Hayek taught us, and the wiser man “is very much aware that we do not know all the answers and that he is not sure that the answers he has are certainly the right ones or even that we can find all the answers.”[3] A century before Hayek offered that insight, social philosopher Herbert Spencer explained that humans would only be truly wise once they fathomed the limits of their own knowledge:
In all directions his investigations eventually bring him face to face with the unknowable; and he ever more clearly perceives it to be the unknowable. He learns at once the greatness and the littleness of human intellect—its power in dealing with all that comes within the range of experience; its impotence in dealing with all that transcends experience. He feels more vividly than any others can feel, the utter incomprehensibleness of the simplest fact, considered in itself. He alone truly sees that absolute knowledge is impossible. He alone knows that under all things there lies an impenetrable mystery.[4]
Unfortunately, most humans suffer from what Nassim Nicholas Taleb calls “epistemic arrogance,” or hubris concerning the limits of our knowledge. “We are demonstrably arrogant about what we think we know,” he says.[5] “We overestimate what we know, and underestimate uncertainty.”[6] “There are no crystal balls, and no style of thinking, no technique, no model will ever eliminate uncertainty,” argues Dan Gardner, author of Future Babble: Why Expert Predictions Are Next to Worthless. “The future will forever be shrouded in darkness. Only if we accept and embrace this fundamental fact can we hope to be prepared for the inevitable surprises that lie ahead,” he adds.[7]
This is why attempts to forecast the future so often end in folly. The great Austrian school economist Israel Kirzner spoke of “the shortsightedness of those who, not recognizing the open-ended character of entrepreneurial discovery, repeatedly fall into the trap of forecasting the future against the background of today’s expectations rather than against the unknowable background of tomorrow’s discoveries.”[8] This is especially true of technological change and the evolution of information markets. “Anyone who predicts the technological future is sure soon to seem foolish,” noted technology scholar George Gilder. “It springs from human creativity and thus inevitably consists of surprise.”[9]
Knowledge of history and historical trends can help inform our decisions and predictions, yet it is insufficient to accurately forecast all that may come our way. “The past seldom obliges by revealing to us when wildness will break out in the future,” observes Peter L. Bernstein, author of Against the Gods: The Remarkable Story of Risk.[10]
We can relate these lessons to Internet policy and digital economics. A largely unfettered cyberspace has left digital denizens better off in terms of the information they can access as well as the goods and services from which they can choose. In true Schumpeterian fashion, “information empires” have come and gone in increasingly rapid succession.[11] Countless “tech titans” have, for a time, seemed to rule their respective sectors, only to experience a precipitous fall.[12] Indeed, if you blink your eyes in the information age, you can miss revolutions.[13]
“The challenge of disruptive innovation,” observes technology lawyer Glenn Manishin, is that “it forces market participants to rethink their premises and reimagine the business they are in. Those who get it wrong will be lost in the dustbin (or buggy whip rack) of history. Those who get it right typically enjoy a window of success until the next inflection point arrives.”[14] And the cycle Manishin describes just keeps repeating faster and faster throughout modern information sectors.
This explains why, just as planning and forecasting often fail in a macro sense, they also fail in a micro sense: industry analysts repeatedly prove unable to accurately identify future trends and marketplace developments. “Markets that don’t exist can’t be analyzed,” observes Clayton M. Christensen, author of The Innovator’s Dilemma. “In dealing with disruptive technologies leading to new markets,” he finds, “researchers and business planners have consistently dismal records.”[15] Simply put, as Yogi Berra famously quipped: “It’s tough to make predictions, especially about the future.”
The ramifications for public policy are clear. Patience and humility are key virtues since, as economist Daniel F. Spulber correctly writes, “Governments are notoriously inept at picking technology winners or steering fast-moving markets in superior directions. Understanding technology requires extensive scientific and technical knowledge. Government agencies cannot expect to replicate or improve upon private sector knowledge. Technological innovation is uncertain by its very nature because it is based on scientific discoveries. The benefits of new technologies and the returns to commercial development also are uncertain.”[16]
Policymakers would be wise to heed all this advice before trying to “plan progress,” especially in highly dynamic, rapidly evolving technology sectors. As the old saying goes, the best-laid plans of mice and men often go awry.
[1] Herman Kahn and Anthony J. Wiener, The Year 2000: A Framework for Speculation on the Next Thirty-Three Years (New York: Macmillan, 1967), 264-65.
[2] Karl Popper, The Poverty of Historicism (London: Routledge, 1957; repr. 2002), 77.
[3] Hayek, The Constitution of Liberty, 41, 406. Similarly, political scientist Vincent Ostrom has observed that “Human beings face difficulties in dealing with a world that is open to potentials for choice but always accompanied by basic limits. There is much about the mystery of being that cannot be known by mortal human beings.” Vincent Ostrom, The Meaning of Democracy and the Vulnerability of Democracies (Ann Arbor: The University of Michigan Press, 1997), 29-30.
[4] Herbert Spencer, “Progress: Its Law and Cause” (April 1857), reprinted in Essays: Scientific, Political, and Speculative (London: Williams and Norgate, 1891), http://oll.libertyfund.org/index.php?option=com_staticxt&staticfile=show.php%3Ftitle=335&chapter=12314&layout=html.
[5] Nassim Nicholas Taleb, The Black Swan: The Impact of the Highly Improbable (New York: Random House, 2007), 138.
[6] Id., 140. Similarly, cognitive psychologist Daniel Kahneman notes that “our comforting conviction that the world makes sense rests on a secure foundation: our almost unlimited ability to ignore our ignorance.” Daniel Kahneman, Thinking, Fast and Slow (New York: Farrar, Straus and Giroux, 2011), 201.
[7] Dan Gardner, Future Babble: Why Expert Predictions Are Next to Worthless, and You Can Do Better (New York: Dutton, 2011), 16-17.
[8] Israel Kirzner, Discovery and the Capitalist Process (Chicago: University of Chicago Press, 1985), xi.
[9] George Gilder, Wealth & Poverty, 101.
[10] Peter L. Bernstein, Against the Gods: The Remarkable Story of Risk (New York: John Wiley & Sons, 1996), 334.
[11] “Each era of computing seems to run for about a decade of total dominance by a given platform. Mainframes (1960-1970), minicomputers (1970-1980), character-based PCs (1980-1990), graphical PCs (1990-2000), notebooks (2000-2010), smart phones and tablets (2010-2020?). We could look at this in different ways like how these devices are connected but I don’t think it would make a huge difference. Now look at the dominant players in each succession – IBM (1960-1985), DEC (1965-1980), Microsoft (1987-2003), Google (2000-2010), Facebook (2007-?). That’s 25 years, 15 years, 15 years, 10 years, and how long will Facebook reign supreme? Not 15 years and I don’t think even 10. I give Facebook seven years or until 2014 to peak.” Robert Cringely, “The Decline and Fall of Facebook,” I, Cringely, July 20, 2011, http://www.cringely.com/2011/07/the-decline-and-fall-of-facebook.
[12] Adam Thierer, “Of ‘Tech Titans’ and Schumpeter’s Vision,” Forbes, August 22, 2011, http://www.forbes.com/sites/adamthierer/2011/08/22/of-tech-titans-and-schumpeters-vision.
[13] Megan Garber, “The Internet at the Dawn of Facebook,” The Atlantic, May 17, 2012, http://www.theatlantic.com/technology/archive/2012/05/the-internet-at-the-dawn-of-facebook/257342.
[14] Glenn Manishin, “Of Buggy Whips, Telephones and Disruption,” DisCo, June 25, 2012, http://www.project-disco.org/competition/of-buggy-whips-telephones-and-disruption.
[15] Clayton M. Christensen, The Innovator’s Dilemma (New York: Harper Business Essentials, 1997), xxv.
[16] Daniel F. Spulber, “Unlocking Technology: Antitrust and Innovation,” Journal of Competition Law & Economics 4, no. 4 (2008): 915-96, at 965.