Luis Villa has an interesting post about the evolving understanding of open source software:
I’ve long thought that in open source software we are seeing a trend away from trust in an institution (think: Microsoft) and towards trust in ‘good luck’, i.e., in the statistical likelihood that if you fall, someone will catch you. In open source, this is most manifest in support: instead of calling a 1-800 number (where someone is guaranteed to help you, as long as you’re willing to be on hold for ages and pay sometimes very high charges), one emails a list, where no one is responsible for you, but yet a great percentage of the time, someone will answer anyway. There is no guarantee, but community practices evolve to make it statistically likely that help (or bug fixing, or whatever) will occur. The internet makes this possible: whereas in the past if you wanted free advice, you had to have a close friend with the right skills and free time, you can now draw from a much broader pool of people. If that pool is large enough (and in software, it appears to be), then it is a statistical matter that one of them is likely to have both the right skills and the right amount of free time.
Clay Shirky today makes an argument that this isn’t just something that is occurring in open source, but is hitting other fields of expertise as well: “My belief is that Wikipedia’s success dramatizes instead a change in the nature of authority, moving from trust inhering in guarantees offered by institutions to probabilities created by processes.” Instead of referring to a known expert to get at knowledge, you can ask Wikipedia, which is the output of a dialectic process that may fail in specific instances but which Clay seems to suggest can be trusted more than any one institution’s processes in the long run.
This is an excellent point, but it’s actually not a new one. Two examples that immediately spring to mind are Darwin’s On the Origin of Species and Friedrich Hayek’s The Road to Serfdom (and, more specifically, his subsequent essay “The Use of Knowledge in Society”). Darwin and Hayek each described decentralized processes in which the correctness of the result is produced by statistical processes, rather than by the good judgment of a trusted authority.
In Darwin’s case, of course, the trusted authority was God, and the statistical process was natural selection. In Hayek’s case, the trusted authority was the state, which socialist intellectuals believed could plan a nation’s economy better than the chaos of the market. In both cases, the central insight was that the problem at hand was too big for any one intelligence to solve, but they explained how the problem could be solved by impersonal, statistical processes that might fail in many individual cases but in the long run would find better solutions than centralized planning could.
It seems to me that the critics of peer-produced works have arguments that closely mirror the arguments of Darwin’s and Hayek’s critics. A few quick examples:
I think the root cause of these kinds of fallacious arguments is that our brains are not wired to think about social systems in statistical terms. In the primitive tribes that shaped the human brain, decisions were made by a tribal leader, and people’s confidence in the correctness of the decisions was based on their personal trust in the judgment of the chief. We’re conditioned to ask who’s in charge, and so we find the notion that nobody is in charge very disconcerting.
I also think the socialist example illustrates another important point: even smart and well-educated people get these sorts of questions wrong. Hayek dedicated his book to “socialists of all parties” because the leading intellectuals of the 1940s tended to be socialists. Spontaneous order is counter-intuitive, whether we’re talking about life, markets, or Wikipedia. As peer production becomes a more important part of the economy, explaining how peer-produced systems work to non-participants will be an increasingly important challenge.