

"Talk Reason" http://www.talkreason.org/articles/Skeptic_paper.cfm

The Dream World of William Dembski's Creationism

By Mark Perakh

The inordinately well-financed Center for Science and Culture of the Discovery Institute of Seattle is the home of the new anti-evolution gang. They fight for modifying the school curricula by inserting creationism as an alternative to evolution, or for what they euphemistically call "teaching the controversy," yet shrug off the label of creationism, calling themselves instead Intelligent Design (ID) theorists.

Repeated defeats of creationists by the US legal system have forced them to regroup and look for new strategies. ID advocates sport scientific degrees from good universities and often display substantial erudition and seeming sophistication much exceeding that of earlier creationists. Since ID purports to be a scientific enterprise, it needs flag bearers with seemingly impressive scientific credentials, if not actual scientific achievements. Foremost among IDers is William A. Dembski, with a long list of degrees including a Ph.D. in mathematics, a Ph.D. in philosophy, and a Master's degree in theology. [1]

Dembski's many degrees and scores of published books and papers cannot conceal, however, that he has never conducted real scientific research. Moreover, Dembski's literary production contains no real mathematics but instead a lot of philosophizing, often saturated with unnecessary mathematical symbolism. As his extensive literary production is critiqued by experts, Dembski, without admitting errors, often surreptitiously shifts his position. These tactics may be handy if winning the battle regardless of means is the only goal, but they also lead to the inconsistency that has become Dembski's trademark.

In this article I shall concentrate on the most salient features of Dembski's prolific literary output, almost all of which turns out to be poorly substantiated, contradictory, and often self-aggrandizing.                         

Dembski's Literary Output, Its Admirers and Detractors

Dembski's fellow travelers frequently and excessively acclaim his works as great breakthroughs in science, even to the extent of comparing him to Isaac Newton. [2] His works, however, have also been extensively criticized. [3] Dembski's latest book is The Design Revolution: Answering the Toughest Questions about Intelligent Design. [4] Contrary to what this title might suggest, the book's index contains none of the names of his harshest critics. Dembski's book more properly should have been subtitled Dodging the Toughest Questions about Intelligent Design.

Dembski is selective in deciding which critiques to respond to and which to ignore. For example, his (mis)use of the No Free Lunch (NFL) theorems (discussed below) was subjected to a strong critique by Wein [5] and Wolpert. [6] In two lengthy rebuttals [7] Dembski spared no effort to reply to Wein, but he never uttered a word in response to Wolpert. It is not hard to understand why. Wein, as Dembski stresses in his posts, has only a bachelor's degree in statistics. This irrelevant factoid, according to Dembski, makes Wein insufficiently qualified to dispute Dembski's work (in fact Dembski evaded answering the substance of Wein's well-justified critical comments). Dembski could not use such silly arguments against Wolpert, because Wolpert is a highly respected mathematician and a co-author of the very NFL theorems Dembski misuses.

I also have written at length about Dembski's ideas. [8] Dembski, who is aware of my critique, [9] has never responded to it. In scientific debates, if one of the disputants fails to respond, it is usually construed as a silent admission of error.

If Dembski were to respond to my critique, he might say that I unduly simplify his sophisticated arguments, thus depriving them of their real, deeper meaning. Indeed, I deliberately simplify his arguments, because to my mind all their seeming sophistication is just a smoke screen intended to make his often hackneyed notions look like important insights where there are none. I see one of my tasks, when discussing Dembski's work, as removing that veneer of sophistication and laying bare his real underlying notions.

Explanatory Filter

Dembski's Explanatory Filter (EF) is, he claims, a reliable tool for discriminating between those events that are results of "Intelligent Design" and those that happen either by chance or due to necessity (also called "law" or "regularity"). He has published a description of the EF in four books and a number of articles, [10] not to mention frequent posts to the Internet, so this is obviously an important component of his ID theory. So far, however, there are no reported instances of a successful application of the EF by anybody, including Dembski's colleagues or Dembski himself, to any specific problem where design may be suspected as an event's antecedent. It is not hard to understand why.

According to Dembski, there are three and only three clearly distinctive categories of antecedent factors (causes) for any event: necessity (also called law or regularity), chance, and Intelligent Design. The term "intelligent" in Dembski's interpretation does not necessarily imply that the designer possesses a high intelligence; he (she, it?) even can be stupid; [11] the term "intelligent" implies only that, regardless of its optimality, design is a product of an "intelligent agent" rather than of chance or necessity.

Many events result from a combination of more than one cause. [12] The notion of only three separate causes does not jibe with reality. Consider Dembski's own favorite case of an archer shooting arrows at a target painted on a wall. According to Dembski, if the archer hits the bull's-eye, it is a result of design (the archer's skill) and only of design. In fact, the archer's skill (design) assures only a certain velocity of the arrow when it leaves the bow. To hit the bull's-eye, the arrow must also follow the laws of mechanics, which determine its trajectory from the bow to the target. Hence, if the arrow hits the bull's-eye, it is the result of a combination of design and regularity (law, necessity). Similarly, because of an occasional gust of wind, chance also may become a factor. All three causes can act simultaneously. [12]

Dembski's EF does not account for events that have more than one cause, and that is one of the reasons EF is an inadequate tool for the design inference. There are, however, more serious fallacies in the EF.

The EF is a flow-chart comprising three "nodes" that correspond to three steps employed to decide whether an event is due to chance, necessity, or design. In fact, of the three "nodes," the first and the second cannot be practically used. According to Dembski, in the first node of the EF, the probability of the event is to be estimated. If it is found to be "large" (Dembski offers no quantitative bounds for such a determination) the event is attributed to law (necessity, regularity); if the probability is not "large," necessity is rejected, and the event passes to the second node of the EF.

In Dembski's imaginary procedure, an event is attributed to necessity (law, regularity) because its probability is found to be "large." Since probability cannot be "read off the event," in practice we can only proceed in the reverse order: first determine that the event is caused by necessity (law, regularity), and only then conclude that its probability is large.

Likewise, in the second node of the EF, Dembski's schema requires us to re-estimate the event's probability. If it is found to be "intermediate" (Dembski again provides no quantitative bounds for such a determination), the event is attributed to chance. If, though, the probability in question is "small," the event passes to the EF's third node. However, to estimate the probability now, we need certain knowledge about the event's causal history, as we did in the first node. Dembski's schema prescribes an unrealistic sequence of steps: an event's probability is found to be intermediate (how?); therefore we attribute it to chance. This schema is impossible to apply, because we cannot read probability "off the event." Since the first and the second nodes of Dembski's EF make no sense, the EF cannot be used in real-life situations.

In the third node of the EF, the final choice is to be made between chance and design. (Again, the possibility of a combination of causes is ignored, as well as the not-so-rare situations wherein no attribution is feasible because of the paucity of information.) Dembski's prescription here differs from the two preceding nodes. First, the probability of the event is estimated assuming it happened by chance (the procedure here is opposite to that suggested for the first and the second nodes). If this probability is not very "small," the event is attributed to chance.

Dembski has suggested a Universal Probability Bound (UPB), [13] which he chose to be UPB = ½×10⁻¹⁵⁰. This threshold translates into about 500 bits of information (discussed later). If the probability of a specified event is smaller than the UPB, that is, if the information associated with this event is over 500 bits, the event, according to Dembski, cannot be attributed to chance.

I will not discuss here Dembski's reasoning for his choice of this particular value of the UPB (although that reasoning has a number of dubious elements), because the value per se is of no principal significance for this discussion.
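To make the structure of the filter explicit, here is a minimal sketch of the three-node flow chart as just described, written in Python. It is my own illustration, not Dembski's code: the thresholds for "large" and "intermediate" probability are placeholder values, since Dembski supplies none; the specification test of the third node is discussed just below; and the caller must somehow supply the event's probability and specification status, which is precisely the practical difficulty already noted.

# A minimal sketch of the Explanatory Filter as described above (my illustration,
# not Dembski's code). LARGE_P and SMALL_P are placeholders: Dembski gives no
# quantitative bounds. The probability and the specification status must be
# supplied by the user, which is exactly the practical problem discussed in the text.

UPB = 0.5e-150      # Dembski's Universal Probability Bound, (1/2) x 10**-150
LARGE_P = 0.5       # placeholder for "large" probability (no bound given by Dembski)
SMALL_P = UPB       # placeholder: "small" taken here to mean below the UPB

def explanatory_filter(probability, specified):
    """Return the attribution the three-node filter would make for an event."""
    if probability >= LARGE_P:        # node 1: "large" probability -> necessity
        return "necessity (law, regularity)"
    if probability >= SMALL_P:        # node 2: "intermediate" probability -> chance
        return "chance"
    if specified:                     # node 3: small probability plus specification
        return "design"
    return "chance"

# With these thresholds, a specific seven-digit phone number (probability 10**-7)
# would be attributed to chance, not design:
print(explanatory_filter(probability=1e-7, specified=True))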

Regarding how small the probability has to be to qualify as a condition for a design inference, Dembski is inconsistent. On the one hand, he seems to prescribe using his UPB as the threshold of a sufficiently small probability. On the other hand, in many of his examples, he views the probability as small enough for inferring design even when it is by orders of magnitude larger than the UPB; for example, he considers a seven-digit phone number as sufficiently improbable to justify a design inference although the probability of this number is immensely larger than the UPB. [14]
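For concreteness, here is the arithmetic behind these two numbers, using the logarithmic measure of information defined in equation (1) below. The figures are my own worked example; the phone-number probability assumes that all 10⁷ seven-digit numbers are equally likely.

I(UPB) = -log2(½×10⁻¹⁵⁰) = 1 + 150·log2(10) ≈ 499 bits, i.e. roughly 500 bits
I(phone number) = -log2(10⁻⁷) = 7·log2(10) ≈ 23 bits, vastly short of 500 bits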

According to Dembski, if the probability of a chance occurrence of an event is found to be small, then the event must next be tested for specification. Let us construe specification in its most common sense—as a choice of an object out of a set of similar objects. For example, if you are asked to pull "a card" from a deck, the card is not "specified." Any card you randomly choose will do. If, though, you are asked to pull the seven of spades from a deck, this time the card is specified, and only the seven of spades will do. This simple example shows the commonly understood difference between specified and unspecified objects.

What is the probability that whichever card you randomly choose will turn out to be "a card"? Obviously this probability is p=1 (or 100%), because any card you choose meets the definition of being "a card." What is the probability that the card you randomly pull from a deck will turn out to be the seven of spades? Since there are 52 cards in the deck, each having the same chance of being randomly chosen, the probability in question this time is 1/52. Hence, in this example specification makes the event's probability 52 times lower than for an unspecified event.
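As a toy check of these two numbers (my own illustration, not anything taken from Dembski), one can simply count favorable outcomes over a 52-card deck:

# Counting check of the card example above (my own toy illustration).
from fractions import Fraction

deck = [(rank, suit)
        for suit in ("spades", "hearts", "diamonds", "clubs")
        for rank in ("A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K")]

# Unspecified request ("a card"): every one of the 52 outcomes qualifies.
p_any_card = Fraction(sum(1 for card in deck), len(deck))
# Specified request ("the seven of spades"): exactly one outcome qualifies.
p_seven_of_spades = Fraction(sum(1 for card in deck if card == ("7", "spades")), len(deck))

print(p_any_card, p_seven_of_spades)   # 1 and 1/52: specification cut the probability 52-fold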

In general, specification always decreases the probability of an event. There is no basis for construing specification as a category qualitatively independent of the event's probability (as Dembski suggests). Specification (if any is detected) adds only quantitatively to the probability estimate but warrants no qualitatively different contribution to the putative design inference.

Dembski imposes certain restrictions on specification if it is to be used for a design inference. [15] Therefore Dembski's specification is a narrower concept than the one used in the discussion above. Not all events that are specified in the above broad sense are specified in Dembski's sense, but all events that are specified in Dembski's sense are also specified in the above broader sense. Therefore, applying the concept of specification only in Dembski's narrower sense does not invalidate the assertion that specification quantitatively decreases the event's probability rather than adding a qualitatively different factor to the design inference.

Therefore the procedure suggested by Dembski for the third node of his EF boils down to the estimate of the event's probability, either directly or disguised as specification. The design inference is thus reduced to an argument from improbability.

Furthermore, probability is a quantity whose estimate, as Dembski himself asserts, [16] is determined by our knowledge about the event in question (in Dembski's parlance, by the "background information H"). The other side of the coin, obviously, is that the estimated probability reflects the level of our ignorance about the event. Dembski's discourse, including his EF, is just a feebly disguised argument from ignorance; its other name is the God-of-the-gaps argument, which has lost credibility even among Dembski's philosophical colleagues. [17]

Dembski admits that EF can produce false negatives, that is, fail to detect ID where ID is in fact present. He insists, however, that his EF does not produce false positives, that is, if it detects design, this result is beyond doubt. To prove this assertion, Dembski offers two lines of proof.

1. Dembski's first proof of EF's reliability in regard to false positives is a  "...straightforward inductive argument: in every instance where the explanatory filter attributes design and where the underlying causal history is known, it turns out design is present; therefore design is actually present whenever the explanatory filter attributes design." [18]

First, as philosopher Dembski must know, if A entails B, B does not necessarily entail A. Even if his assertion (that the explanatory filter correctly infers design whenever an event is known to be caused by ID) were true, that fact in itself would not necessarily lead to the reverse conclusion (that each time the explanatory filter attributes design, intelligence is indeed the causal antecedent of the observed event). In fact, though, Dembski does not substantiate even the underlying statement (for example, by providing a more or less extensive record of situations where the causal history is known and the filter infers design). He discusses just a few examples, and we cannot know whether these examples were selected at random or chosen deliberately (cherry-picked) because they seemed to fit his thesis. Moreover, this alleged proof can be shown to be false by simply pointing to cases where the EF clearly produces false positives. There are many examples of such false positives, [19] and we will discuss one more (the case of triangular snowflakes) below.

An early example of a false positive came from the ID camp itself, from a prominent proponent of ID, philosopher of science Del Ratzsch. [20] Dembski never addressed Ratzsch's example; neither did he acknowledge the examples of false positives suggested by other critics of his work, while continuing to insist that his EF never produces false positives. [21]

2. Dembski's second "proof" of the EF's reliability is: "The Explanatory Filter is a reliable criterion for detecting design because it coincides with how we recognize intelligent causation generally." [22] If this is so, why do we need the EF at all, since we can already detect design without it and know how to do it "generally"? On the other hand, if his EF is indeed a novel tool for detecting design, superior to "how we do it generally," how can its reliability be established by comparing it with an inferior procedure used without it?

Both "proofs" of EF's reliability suggested by Dembski are in fact not convincing.

Specified Complexity

Dembski pays much attention to specification as such, apart from its role in the EF, and he employs a number of alternative terms, such as Complex Specified Information (CSI), Specified Complexity (SC), and sometimes simply specification, often shifting the meaning he attributes to these terms.

When using the term specification, Dembski explains that it means a certain type of pattern. [23] To serve as a specification, says Dembski, the pattern must meet a set of additional requirements. The most important among those requirements is perhaps "detachability." For example, if you see a heap of stones arranged in a certain pattern that is not familiar to you, this pattern is not "detachable" from this specific heap of stones—it does not match any image you have antecedently stored in your memory. If, though, an astronomer comes across the same heap of stones, and recognizes in it a pattern reproducing the shape of a certain constellation, the image of which is familiar to him, then for him this pattern is "detachable" from the particular heap of stones and serves as a specification (that is, can lead to the design inference: the conclusion that some intelligent agent has intentionally placed the stones in the image of a constellation). [24] This example, however, shows that Dembski's specification is no more than a subjective recognition of the pattern. (In this example, as well as in many other instances discussed by Dembski, the probability that the pile of stones has the shape of a constellation may be many orders of magnitude larger than UPB = ½×10⁻¹⁵⁰; this is just one of the many examples of Dembski's inconsistency.)

Another necessary component of specification, according to Dembski, is complexity, which he construes as tantamount to low probability: "Probability measures are disguised complexity measures...with the disguise involving nothing more than a change in direction and scale." Similarly, "the greater the complexity, the smaller the probability." [25]

I believe that the very concept of complexity as disguised improbability is contrary to facts and logic. For example, under certain (rare) weather conditions, an unusual triangular shape of snowflakes can be observed. [26] Unlike more common forms of snowflakes with their intricately complex structure, these rare snowflakes have a simple structure. As Dembski asserted, [27] snow crystals' shapes are due to necessity — the laws of physics predetermine their appearance. However, triangular snowflakes, while indeed predetermined by laws of physics, occur only under certain weather conditions, which are very rare and unpredictable. Therefore we have to conclude that the emergence of the triangular snowflakes is a random event. This is another example where at least two causal antecedents—chance and law—are in play simultaneously.

Since the appropriate weather conditions occur very rarely, the probability of the chance emergence of the triangular snowflakes is very small; also, they have a uniquely specific shape. Hence, according to the EF, these snowflakes were deliberately designed. The more reasonable conclusion, however, is that they appeared by chance (plus the necessary contribution of law). (This is also another example of a false positive produced by the EF.) Since the probability of the occurrence of these snowflakes is small, then, according to Dembski's insistence that large complexity is equivalent to low probability, their complexity must be large. In fact, though, the rare triangular snowflakes have the simplest form among all the snowflakes observed. Thus, Dembski's thesis asserting that complexity is tantamount to small probability is an unsubstantiated and therefore misleading suggestion.

Complex Specified Information

Dembski's  Complex Specified Information (CSI) is in fact a combination of low probability (which he construes as tantamount to complexity) with a recognizable pattern. For example, according to Dembski, a string of gibberish displays no CSI, even if its spontaneous emergence has a very low probability, but a segment of a meaningful text possesses CSI because not only is its spontaneous emergence improbable but it also displays a recognizable pattern.

In its formal rendition, Dembski's Complex Specified Information (CSI) comprises three components: (a) information, (b) complexity, and (c) specification.

(a)   Information. Dembski's definition of information is [28] 

 

                                              I(E) = -log2 p(E)          (1)

where I stands for information associated with an individual event E, p is the probability of that event, and the logarithm is to the base of 2. In information theory, I is often called surprisal; another, more recent term is self-information. [29]

The definition (1) simply expresses probability in a logarithmic form. In this rendition, the concept of information contains nothing beyond the concept of probability.
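Definition (1) is trivial to state in code. The following sketch (my own, for illustration) makes the point visible: I is nothing but the probability re-expressed on a logarithmic scale, so ranking events by I is the same as ranking them by improbability.

import math

def self_information(p):
    """Equation (1): the surprisal, in bits, of an event of probability p."""
    return -math.log2(p)

# The mapping is just a monotone re-scaling of probability:
print(self_information(1 / 52))        # the seven of spades: about 5.7 bits
print(self_information(27.0 ** -100))  # a specific 100-character string over a
                                       # 27-letter alphabet: about 475 bits
print(self_information(0.5e-150))      # Dembski's UPB: about 499 bits, roughly 500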

If a definition has been selected, it has to be applied consistently. However, having chosen (1) as a definition of information, Dembski a few pages later [30] refers to the same quantity I as complexity. If I is complexity, then (1) contradicts Dembski's own earlier definition of complexity [31] as a measure of the difficulty of solving a problem, since (1) has nothing to do with the difficulty of a problem.

(b) Complexity, as we have seen, in Dembski's view is the equivalent of low probability.

(c) Specification likewise cannot be viewed as a category qualitatively distinctive from probability (as discussed earlier).

Hence, all three components of CSI are in fact just components of the overall probability of the event whose causal history is in question. Dembski's use of the concept of CSI, and with it his "complexity-specification criterion," [32] are just a re-phrased argument from improbability, or, therefore, as noted above, an argument from ignorance, to which Dembski has added no novel features besides unnecessary mathematical symbolism.

The Law of Conservation of Information

Dembski's penchant for idiosyncratic terms and allegedly revolutionary novel concepts perhaps finds its most salient expression in his Law of Conservation of Information (LCI). Experts in information theory have so far paid no attention to Dembski's supposedly revolutionary breakthrough, [2] so that there are no references to Dembski's LCI in any books or papers professionally dealing with information theory.

The most concise rendition of the LCI by Dembski is perhaps: "Natural causes are incapable of generating CSI." Dembski's first corollary, "The CSI in a closed system of natural causes remains constant or decreases," [33] may serve as an alternative rendition of the LCI insofar as it is relevant to our discussion.

Dembski does not define "closed system of natural causes." In particular, it is unclear whether it includes human intelligence, which obviously can generate CSI but usually is not considered supernatural. Regardless, I will show that LCI, if followed consistently using Dembski's notions, contradicts the second law of thermodynamics. (Apparently not satisfied with introducing a new "law" regarding information, Dembski suggests that his LCI can be expanded to become the Fourth Law of Thermodynamics. [34])

Let us see whether Dembski's various statements, if applied consistently, lead to a conclusion compatible with the second law of thermodynamics.  To this end, let us juxtapose several of Dembski's statements relevant to his LCI.

(a) Dembski uses the concept of entropy H according to Shannon's information theory, asserting that "the average information per character in a string is given by entropy H." [35] That is, Dembski maintains that

 average information = entropy.

(b) CSI, by definition, is a combination of three components: information, specification, and complexity.

From (a) and (b) it follows that the behavior of CSI (which includes information as its necessary component) must not contradict the laws that determine the behavior of entropy. Entropy obeys the second law of thermodynamics, according to which entropy in a closed system cannot spontaneously decrease. Since Dembski accepts that "entropy = average information" (see above), he must conclude that the average information associated with a closed system cannot spontaneously decrease; it can only increase or remain unchanged.
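To see what statement (a) amounts to in Shannon's theory: the entropy H of a source is simply the average, over the source's probability distribution, of the self-information of equation (1). A minimal sketch (mine, for illustration):

import math

def shannon_entropy(distribution):
    """H = sum of p * (-log2 p): the average self-information per character."""
    return sum(p * -math.log2(p) for p in distribution if p > 0)

# A source emitting four characters with these probabilities:
probabilities = [0.5, 0.25, 0.125, 0.125]
print(shannon_entropy(probabilities))   # 1.75 bits of information per character, on average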

A thermodynamically closed system, by definition, does not exchange matter or energy with its surroundings. A system closed in the informational sense does not exchange information with its surroundings. Although matter/energy and information are not the same, the behavior of both can in many respects be characterized by the same quantity: entropy. The units used for thermodynamic entropy differ from those used for informational entropy, but that is only a matter of convenience; entropy is essentially a dimensionless quantity [36] whose behavior is governed by the same laws for both thermodynamic and informational entropy.

Hence, while Dembski's LCI asserts that CSI cannot increase in his "closed system of natural causes," his other notions, if combined with the second law of thermodynamics, assert that average information (a.k.a. entropy) in a closed system cannot decrease. These two statements are incompatible.

Specification—one of the components of CSI—is, in Dembski's rendition, a qualitative concept. An event is either specified or not; there are no degrees of specification (see, however, another view [37]). The two other components of CSI—information and complexity—are quantitative; complexity, however, according to Dembski, is just a property of information when the latter exceeds about 500 bits. Therefore an increase or decrease of CSI necessarily implies an increase or decrease of its constituent information (because the third component of CSI, specification, is only a qualitative concept). Hence, if we talk about an increase or decrease of CSI, then, in accordance with Dembski's concepts, we necessarily talk about an increase or decrease of information, and therefore of entropy (which is just average information; see above).
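In symbols, with ΔH denoting the change of entropy (that is, of average information) in a closed system, the juxtaposition comes down to:

second law of thermodynamics (closed system):   ΔH ≥ 0
Dembski's LCI, restated via (a) and (b):        ΔH ≤ 0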

Whereas the second law of thermodynamics prohibits entropy's decrease, Dembski's LCI allows for its decrease but prohibits its increase. I see no way to reconcile Dembski's LCI with the second law of thermodynamics. This seems to be a sufficient reason (albeit not the only one) to assert that Dembski's alleged fourth law of thermodynamics, which is supposed to be a generalization of his LCI, makes no sense.

No Free Lunch?

The title (No Free Lunch) of Dembski's 2002 book refers to certain theorems of optimization theory [38] that Dembski asserts make evolution by a Darwinian path impossible. For example, "The No Free Lunch theorems dash any hope of generating specified complexity via evolutionary algorithms" (p. 196). Similarly, "The No Free Lunch theorems show that evolutionary algorithms, apart from careful fine-tuning by a programmer, are no better than blind search and thus no better than pure chance" (p. 212), and "The No Free Lunch theorems show that for evolutionary algorithms to output CSI they had first to receive a prior input of CSI" (p. 223).

There are several NFL theorems, the most relevant for our discussion being the "first NFL theorem for search" (NFL-1).

At the heart of the NFL theorems are two concepts: a fitness function (or its opposite, a cost function), and search algorithms. A fitness function (often a synonym of "figure of merit") is a quantitative characteristic of a system that is related to its functioning or its usefulness, or is of interest for any other reason. For example, the fitness function may list the heights of peaks in some mountainous region as a function of their locations. The physical relief of the mountainous region is an example of a fitness landscape. Imagine that we want to find the highest peak in that region. Search algorithms are the sequences of steps to be taken in the search for the highest peak. The search may be directed toward a certain target—say, a peak which is 6,000 meters above the sea level. In this case the search is terminated when the target, the 6,000 meter high peak, has been located. Equally the search may not be directed toward a pre-selected target but may be conducted until, say, a pre-selected number of peaks have been explored, and then the search is terminated regardless of how tall the last conquered peak turns out to be.

NFL-1 is equally valid for the first (targeted) and the second (non-targeted) searches. Algorithms are strategies employed for the search. One strategy may be climbing up peaks one by one, moving from the mountain region's periphery toward its geographic center, measuring the heights of the conquered peaks with an altimeter and recording them. Another strategy may suggest climbing peak after peak, selecting at each step a nearby peak that looks higher than the one already conquered. These two strategies correspond to two different search algorithms.

Although NFL-1 per se does not involve any performance measure, it may be convenient to employ one; a performance measure is not required by the NFL theorem, but it facilitates judgments about the efficacy of an algorithm. It is in principle unimportant how the data directly obtained in the search are mapped onto the performance measure. [38] In a search for a pre-selected peak (a targeted search), the performance measure may be chosen, for example, as the number of steps an algorithm needs to find the target: the algorithm that finds the target in fewer steps "performs" better. In a non-targeted search that terminates after a pre-selected number of steps, the performance measure may be, for example, the maximum height reached after the pre-selected number of climbs: the algorithm that ends the search at a taller peak "performs" better.

NFL-1 per se has nothing to do with the choice of a performance measure. It considers the set of data obtained by the search algorithms and presented as a table listing the measured values of the fitness function in temporal order. This table is called a sample. NFL-1 relates to samples in probabilistic terms. It says that the probabilities of obtaining a certain sample by two different search algorithms are the same if these probabilities are averaged over all possible fitness landscapes.

The word averaged is crucial. NFL-1 asserts that no algorithm is better than any other algorithm if their results are averaged over all possible fitness functions. Thus, if a certain algorithm is better on a certain type of fitness landscapes, it is necessarily worse on some other types of fitness landscapes. Of critical importance, the NFL-1 theorem says nothing about any algorithms' advantages or shortcomings on specific fitness landscapes, where one algorithm may drastically outperform other algorithms at the cost of being inept on some other fitness landscapes.
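The averaging claim is easy to check numerically on a toy search space. The sketch below is my own (it is not Wolpert and Macready's code, and the "adaptive" probing rule is an arbitrary invention for illustration): it enumerates every possible fitness function on a five-point space with three fitness values and compares two non-revisiting algorithms by the best value found after three probes. Averaged over all 243 landscapes their performance is identical, yet on a particular landscape they differ sharply.

# Toy check of the "averaged over all fitness functions" claim (my own sketch).
from itertools import product

POINTS, VALUES, PROBES = 5, (0, 1, 2), 3

def fixed_order(f):
    """Probe points 0, 1, 2, ... in a fixed order."""
    return [f[i] for i in range(PROBES)]

def adaptive(f):
    """Probe point 0; then jump to the far end whenever the last value looked poor."""
    visited, samples = [0], [f[0]]
    while len(samples) < PROBES:
        unvisited = [i for i in range(POINTS) if i not in visited]
        nxt = unvisited[0] if samples[-1] >= 1 else unvisited[-1]
        visited.append(nxt)
        samples.append(f[nxt])
    return samples

def average_best(algorithm):
    """Best fitness after PROBES probes, averaged over every possible landscape."""
    landscapes = list(product(VALUES, repeat=POINTS))
    return sum(max(algorithm(f)) for f in landscapes) / len(landscapes)

print(average_best(fixed_order), average_best(adaptive))   # the two averages coincide

# On one specific landscape, however, the algorithms are far from equivalent:
peak_at_far_end = (0, 0, 0, 0, 2)
print(max(fixed_order(peak_at_far_end)), max(adaptive(peak_at_far_end)))   # 0 versus 2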

Now look at how Dembski renders the gist of the NFL theorems: [39]

"A generic NFL theorem now takes the following form: It sets up a performance measure M that characterizes how effectively an evolutionary algorithm E locates a target T within m steps using information j."

This description is wrong in all of its parts. First, NFL-1 does not "set up a performance measure." No such quantity is mentioned in the theorem's proof, which is valid regardless of any performance measure. Second, NFL-1 does not refer to any targets; it is equally valid for algorithms searching for a target and for algorithms that do not search for any pre-selected target. Third, NFL-1 has nothing to do with any information j residing outside the search space (this point will be discussed below in the section about the "displacement problem").

Dembski misuses the NFL theorem when he asserts that evolutionary algorithms cannot outperform blind search: "since blind search always constitutes a perfectly valid evolutionary algorithm, this means that the average performance of any evolutionary algorithm E is no better than blind search." [39] Since blind search is an extremely slow process, and no other algorithm can do better than blind search, then, according to Dembski, evolutionary algorithms cannot ensure the rate of evolution required by evolutionary theory, which entails random mutations plus natural selection.

This statement is misleading. Evolutionary algorithms indeed cannot outperform blind search but, as Dembski knows, only if their performance is averaged over all possible fitness functions. They can (and do) immensely outperform blind search on specific fitness landscapes, both in computer simulations and in the real biosphere. Dembski himself reviews examples of evolutionary algorithms immensely outperforming blind search (or random sampling) — like Richard Dawkins' "weasel algorithm," [40] a checkers-playing algorithm [41], and an antenna-designing algorithm [42] — but he forgets about them when wrongly claiming that NFL-1 prohibits biological evolution.
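Dawkins' "weasel" illustration, which Dembski himself reviews, is easy to reconstruct; the sketch below is a common reconstruction of that kind of cumulative selection (my own code and parameter choices, not Dembski's or Dawkins'). On this one specific fitness landscape, counting matching characters, mutation plus selection reaches the target phrase after on the order of a hundred generations of a hundred offspring each, whereas blind search would need on the order of 27^28 (roughly 10^40) random tries.

# A sketch of cumulative ("weasel"-style) selection on one specific fitness
# landscape (my own reconstruction; the parameters are arbitrary illustrations).
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
random.seed(0)

def fitness(phrase):
    """The fitness landscape: number of positions matching the target phrase."""
    return sum(a == b for a, b in zip(phrase, TARGET))

def evolve(offspring_per_generation=100, mutation_rate=0.05):
    parent = [random.choice(ALPHABET) for _ in TARGET]
    generations = 0
    while fitness(parent) < len(TARGET):
        generations += 1
        brood = [[c if random.random() > mutation_rate else random.choice(ALPHABET)
                  for c in parent]
                 for _ in range(offspring_per_generation)]
        parent = max(brood, key=fitness)    # selection: keep the fittest offspring
    return generations

print(evolve())   # typically on the order of a hundred generations; blind search
                  # over 27**28 equally likely strings would be hopeless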

Dembski admits that evolutionary algorithms can outperform blind search if they are fine-tuned by a programmer. He does not believe, though, that natural genetic algorithms can be naturally fine-tuned to climb natural fitness landscapes. [43] This disbelief, although it is Dembski's prerogative, rests not on any empirical or logical foundation but only on Dembski's philosophical/religious convictions. It is possible, though, to show that the fitness landscapes encountered in the real biosphere are often well matched to the available genetic algorithms based on mutations and natural selection, as the following section discusses.

Displacement Problem

Chapter 4 of Dembski's No Free Lunch is devoted to NFL-1, which allegedly proves the impossibility of Darwinian evolution. Dembski's thesis was strongly rebutted by a number of critics, including the co-author of the NFL theorems, David Wolpert. [6] Confronted with this critique, Dembski, without acknowledging his error, tried to make it look inconsequential. Since he could not make chapter 4 of his book disappear, he announced instead [44] that, contrary to the obviously triumphant appeal to the NFL theorems in his book, these theorems were not really crucial for his thesis but just a particular example of what he calls the displacement problem (which, however, was in fact introduced in his book as a consequence of his interpretation of the NFL theorems).

Here is how Dembski defines the displacement problem: [45]

"...the problem of finding a given target has been displaced to the new problem of finding the information j capable of locating that target. Our original problem was finding a certain target within phase space. Our new problem is finding a certain j within the information-resource space J."

As noted earlier, the real problem is not necessarily "finding a given target" because search algorithms may work without being directed toward a "given target," and indeed, biological evolution is a process where there is no long-term target.

With his habitual inconsistency, Dembski in some instances writes that evolutionary algorithms are supposed to be "non teleological," that is, not directed to a target, and also that biological evolution is a process without a pre-selected target. In other instances, however, he states the opposite — that evolution is after all at least partially targeted. For example, "evolutionary algorithms are supposed to be capable of solving complex problems without invoking teleology." [46] But later, "An evolutionary algorithm is supposed to find a target within phase space." [46] Since a search for a target is necessarily teleological, these two statements are contradictory.

Furthermore, all of Dembski's talk about information j sitting in an "information-resource space J" is irrelevant to real-life problems. The NFL theorems are valid only for "black-box" algorithms. This means that, before starting the search over the fitness landscape, the algorithm incorporates no knowledge whatsoever about the landscape. It probes the landscape one point at a time, gradually acquiring bits of information about the landscape. The fitness landscape is always a given: the algorithm has no choice of a fitness landscape but only explores the existing landscape it faces. Therefore the displacement problem is a phantom.

Imagine an organism existing in a certain environment. As a simplified example, let's say it is an animal that feeds on fruit growing on trees but has no tree-climbing skill. If the animal is too small, it has problems in reaching the fruit and its survival is uncertain. If it is too tall, it has problems in moving through the dense jungle and thus reaching more trees. Hence we might expect a certain optimal size for that animal, not too small and not too large, which provides the best chance of surviving and thus having progeny. The performance measure in this case can be, say, the number of descendants left by the animal, or the duration of the animal's life, or any other quantity that can be used to characterize the animal's chance for leaving more descendants. If we plot the dependence of this quantity on the animal's size, we will get a more-or-less bell-shaped curve with a peak of fitness at a certain optimal size. This graph is a simplified, two-dimensional model of a fitness landscape (which usually is multi-dimensional). This fitness landscape is determined by the environment. The animal has no choice of a fitness landscape — it is given. Search algorithms (sequences of events and actions resulting in exploration of the fitness landscape) that entail random mutations and natural selection fit in well with this fitness landscape: those mutations which result in the animal's size approaching the optimal value will be naturally selected to ensure the maximum fitness, that is, have the best chance of leaving progeny.

No excursion into the "information-resource space" imagined by Dembski is required, so his displacement problem is an abstract invention that does not exist in practice.

Conclusion

Dembski has authored or edited at least eight books and numerous articles, essays, and Internet posts. He does not shy away from reproducing, often verbatim, the same passages time and time again, apparently striving to have his ideas disseminated as widely as possible, in every medium he can reach. On the other hand, when encountering criticism, he sometimes surreptitiously modifies his argument (without ever admitting error) so as to quietly slide out of the predicament caused by the criticism. [47]

Because of the large volume of Dembski's publications, my review has necessarily been cursory. I hope I have nevertheless shown that Dembski's highly acclaimed achievements are just a nebulous dream; the real content of his ideas and notions stands in inverse relation to the intensity of the praise heaped upon him by the ID crowd. If Dembski's work is the best the ID advocates have to show, then the entire ID enterprise is a political movement that wholly lacks scientific significance.

Comment: After this paper was submitted for publication, Dembski posted to the Internet two essays allegedly providing a "mathematical foundation of Intelligent Design." A brief critical discussion of these essays can be found at www.talkreason.org/articles/math.cfm and www.talkreason.org/articles/newmath.cfm.

Acknowledgment.

I would like to thank Matt Young for helpful editorial advice.

Notes

[1] Dembski, William. 2004. "Biographical Sketch." http://www.designinference.com/biosketch.htm, accessed on December 12, 2004.

[2] Koons, Rob. 1999. Blurb on the dust cover of Dembski's Intelligent Design (see note 10c).

[3] (a) Edis, Taner. 2002. "Darwin in Mind: Intelligent Design Meets Artificial Intelligence." www.csicop.org/si/2001-03/intelligent-design.html, accessed on June 23.

(b) Eells, Ellery. 1999. "Review of The Design Inference by William A. Dembski." Philosophical Books 40, No 4.

(c) Elsberry, Wesley R. 1999. "Review of WA Dembski, The Design Inference." Talk Reason. www.talkreason.org/articles/inference.cfm, accessed on August 14, 2003.

(d) Elsberry, Wesley R. and Jeffrey Shallit, 2003. "Information Theory, Evolutionary Computation, and Dembski's Complex Specified Information." Talk Reason. www.talkreason.org/articles/eandsdembski.pdf, accessed on April 29, 2004. 

(e) Fitelson, Branden, Christopher Stephens, and Elliott Sober. 1999. "How Not to Detect Design—Critical Notice: William A. Dembski, The Design Inference." Philosophy of Science 66: 472–88.

(f) Godfrey-Smith, Peter. 2001. "Information and the Argument From Design." In R. T. Pennock, ed., Intelligent Design Creationism and Its Critics: Philosophical, Theological and Scientific Perspectives. Cambridge, MA: MIT Press: 575-596.

(g) Korthof, Gert. 2000. "On the Origin of Information by Means of Intelligent Design", in Was Darwin Wrong? http://home.planet.nl/~gkorthof/kortho44.htm, accessed on August 1, 2003. 

(h) Pennock, Robert T. 1999. Tower of Babel: The Evidence against the New Creationism. Cambridge: MIT Press.

(i) Orr, H. Allen. 2002. "Review of No Free Lunch, by William Dembski." Boston Review 27, no. 3.

(j) Pigliucci, Massimo. 2001. "Design Yes, Intelligent No: A Critique of Intelligent Design Theory and Neo-Creationism." Skeptical Inquirer 25, no. 5: 34–39.

(k) Ratzsch, Del. 2001. Nature, Design, and Science: The Status of Design in the Natural World. New York: State University of New York Press: 153-168.

(l) Rosenhouse, Jason. 2002. "Probability, Optimization Theory, and Evolution." Evolution, v. 56, No 8, 1721.

(m) Shallit, Jeffrey. 2003. "Review of Dembski's No Free Lunch." http://www.math.uwaterloo.ca/~shallit/nflr3.txt, accessed on August 7, 2003.

(n) Shallit, Jeffrey and Wesley R. Elsberry. 2004. "Playing Games With Probability: Dembski's 'Complex Specified Information.'" Chap. 9 of M. Young and T. Edis, eds., Why Intelligent Design Fails: A Scientific Critique of the New Creationism. Brunswick, NJ: Rutgers University Press.

(o) Shanks, Niall. 2004. God, the Devil, and Darwin. New York: Oxford University Press.

(p) Stenger, Victor J. 2001. "Intelligent Design—The New Stealth Creationism," Talk Reason, www.talkreason.org/articles/Stealth.pdf., accessed on June 12, 2003.

(q) Tellgren, Erik. 2002. "On Dembski's Law of Conservation of Information." www.talkreason.org/articles/dembski_LCI.pdf. Accessed on April 28, 2004.

(r) Van Till, Howard J. 2003. "E coli at the No Free Lunch Room: Bacterial Flagella and Dembski's case for Intelligent Design." www.aaas.org/spp/dser/evolution/perspectives/vantillecoli.pdf, accessed on April 28, 2004.

(s) Wein, Richard. 2000. "Wrongly Inferred Design." www.talkreason.org/articles/wrongly.cfm, accessed on April 28, 2004.

(t) Wilkins, John S. and Wesley R. Elsberry. 2001. "The Advantages of Theft over Toil: The Design Inference and Arguing From Ignorance." Biology and Philosophy, 16: 711-724. 

(u) Young, Matt. (u1) 2001. "Intelligent Design Is Neither," paper presented at the conference Science and Religion: Are They Compatible? Atlanta, Georgia, November 9-11, www.mines.edu/~mmyoung/DesnConf.pdf, accessed on April 28, 2004.

(u2) 2002. "How to Evolve Specified Complexity by Natural Means," www.pcts.org/journal/young2002a.html, accessed on April 28, 2004.

(u3)  2004. "Dembski's Explanatory Filter Delivers a False Positive," Panda's Thumb, http://www.pandasthumb.org/pt-archives/000166.html, posted April 22, 2004.

(And others.)

[4] Dembski, William. 2004. The Design Revolution: Answering the Toughest Questions about Intelligent Design, Downers Grove: InterVarsity Press.

[5] Wein, Richard. (a) 2002. "Not a Free Lunch but a Box of Chocolates." Talk Reason. www.talkreason.org/articles/choc_nfl.cfm, accessed on April 28, 2004.

(b) 2004. "The Designer-of-the-Gaps Revisited." In Talk Reason, www.talkreason.org/articles/Designer.cfm, accessed on April 28, 2004.

[6] Wolpert, David H. 2003. "Dembski's Treatment of the NFL Theorems Is Written in Jello." In Talk Reason. www.talkreason.org/articles/jello.cfm, accessed on April 28, 2004.

[7] Dembski. 2002 (a) "Obsessively Criticized but Scarcely Refuted: A Response to Richard Wein." http://www.designinference.com/documents/05.02.resp_to_wein.htm. Accessed on April 29, 2004.

(b) "The Fantasy Life of Richard Wein: A Response to Response." www.designinference.com/documents/2002.6WeinsFantasy.htm.

[8] Perakh, Mark. (a) 2001. "A Consistent Inconsistency." Talk Reason. www.talkreason.org/articles/dembski.cfm, accessed on April 28, 2004.

(b) 2002 "A Free Lunch in a Mousetrap." Talk Reason. www.talkreason.org/articles/dem_nfl.cfm, accessed on April 28, 2004.

(c) 2003 "A Presentation without Arguments: Dembski Disappoints." Skeptical Inquirer 26, no. 6: 31–34.

 (d) 2003. "The No Free Lunch Theorems and Their Application To Evolutionary Algorithms." In Talk Reason. www.talkreason.org/articles/orr.cfm, accessed on April 28, 2004.

(e) 2004. Unintelligent Design. Amherst, NY: Prometheus Books (chapter 1).

(f) 2004. "There Is a Free Lunch after All: William Dembski's Wrong Answers to Irrelevant Questions." Chapter 11 of M. Young and T. Edis, eds., Why Intelligent Design Fails: A Scientific Critique of the New Creationism, Brunswick, N. J.: Rutgers University Press.

(g) 2004. "The Design Revolution? How William Dembski Is Dodging Questions About Intelligent Design." In Talk Reason. www.talkreason.org/articles/Revolution.cfm, accessed on April 28, 2004.

(h) Elsberry, Wesley, and Mark Perakh. 2004. "How Intelligent Design advocates turn the sordid lessons from Soviet and Nazi history upside down." Talk Reason. www.talkreason.org/articles/eandp.cfm, accessed on April 28, 2004.

(i) Perakh, Mark, and Matt Young. 2004. "Is Intelligent Design Science?" Chap. 13 of M. Young and T. Edis, eds., Why Intelligent Design Fails: A Scientific Critique of the New Creationism. Brunswick, N. J.: Rutgers University Press.

[9] Dembski, William. 2004. (No title). http://www.arn.org/ubb/ultimatebb.php?ubb=get_topic;f=13;t=001197. Accessed on March 13, 2004.

[10] Dembski, William A. 1998. (a) The Design Inference: Eliminating Chance through Small Probabilities. Cambridge: Cambridge University Press.

(b) 1999. "Redesigning Science." In W.A. Dembski, ed. Mere Creation: Science, Faith, and Intelligent Design. Downers Grove, Ill.: InterVarsity Press.

(c) 1999. Intelligent Design: The Bridge Between Science and Theology. Downers Grove, Ill.: InterVarsity Press. 

(d) 2000. "The Third Mode of Explanation: Detecting Evidence of Intelligent Design in the Sciences." In W.A.Dembski, M. J. Behe, and S. C. Meyer, eds., Science and Evidence for Design in the Universe. San Francisco: Ignatius Press.

(e) 2001. "What Intelligent Design Is Not." In W. A. Dembski and J. M. Kushiner, eds., Signs of Intelligence: Understanding Intelligent Design. Grand Rapids, Mich.: Brazos Press. 2002.

(f) 2002. No Free Lunch: Why Specified Complexity Cannot Be Purchased without Intelligence. Lanham, Md.: Rowman and Littlefield.

[11] Dembski, "What Intelligent Design Is Not."

 

 [12] Perakh, Unintelligent Design, chapter 1.

[13] Dembski, Intelligent Design, and No Free Lunch.

[14] Dembski, Intelligent Design, p. 159.

[15] Dembski, The Design Inference, pp. 151-154; No Free Lunch, pp. 115-118.

[16] Dembski, The Design Inference, pp. 73-75.

[17] Plantinga, Alvin. 2001. "Methodological Naturalism?" In R. T. Pennock, ed. Intelligent Design Creationism And Its Critics. Cambridge: MIT Press, 349-350.

[18] Dembski. "Redesigning Science," p.107.

[19] (a) Perakh: (a1) "A Consistent Inconsistency;"

(a2) Unintelligent Design.

(b) Young, "Dembski's Explanatory Filter Delivers a False Positive," and others.

[20] Ratzsch, Nature, Design, and Science. The example of a false positive produced by the EF given in this book (pp. 166-167) is a case of driving on a desert road whose left side was flanked by a long fence with a single small hole in it. A tumbleweed driven by wind happened to cross the road in front of Ratzsch's car and rolled precisely through the sole tiny hole. The event had an exceedingly small probability and was "specified" in Dembski's sense (exactly as a hit of a bull's-eye by an arrow in Dembski's favorite example). Dembski's EF leads to the conclusion that the event in question (tumbleweed rolling through the hole in the fence) was designed while it obviously was due to chance; this is a false positive. 

[21] Dembski. The Design Revolution.

[22] Dembski, "Redesigning Science," p. 111.

[23] Dembski. The Design Inference, p. 136.

[24] Dembski. The Design Inference, p. 17.

[25] Dembski. The Design Inference, pp. 114-115.

[26] (a) Nakaya, U. 1954. Snow Crystals: Natural and Artificial, Cambridge, MA: Harvard University Press.                                                                          

(b) Tape, W. 1994. Atmospheric Halos. Antarctic Research Series, v. 69. American Geophysical Union.

[27] Dembski, No Free Lunch, p. 12.

[28] Ibid, p. 127

[29] Gray, Robert M. 1991. Entropy and Information Theory. Berlin: Springer-Verlag.

[30] Dembski. No Free Lunch, p. 166.

[31] Dembski. The Design Inference, p. 94.

[32] Dembski. No Free Lunch, p. 6.

[33] Ibid, pp. 159-160.

[34] Ibid, pp. 166-173.

[35] Ibid, p. 131.

[36] Landau, Lev D., and Evgeniy M. Lifshits. 1971. Statisticheskaya physika (Statistical physics). Moscow: Nauka; 40.

[37] Perakh. Unintelligent Design, pp. 34-35.

[38] Wolpert, David H and William G. Macready. 1997. "The No Free Lunch Theorems for Optimization." IEEE Trans. Evol. Comp. v.1, no 1, 67–82.

[39] Dembski, No Free Lunch, pp. 200-202.

[40] Dawkins, Richard. 1996. The Blind Watchmaker: Why the Evidence of Evolution Reveals a Universe Without Design. New York: Norton.

[41] Chellapilla, Kumar and David B. Fogel. 1999. "Co-Evolving Checkers Playing Programs Using Only Win, Lose or Draw." SPIE's AeroSence '99: Applications and Science of Computational Intelligence II (Orlando, Fla.: 5–9 April 1999).

[42] Altshuler, Edward E., and Derek S. Linden. 1999. "Design of Wire Antennas Using Genetic Algorithms." In Y. Rahmat-Samii and E. Michielssen, eds., Electromagnetic Optimization by Genetic Algorithms (New York: Wiley): 211–248.

[43] Dembski. No Free Lunch, pp. 224-228.

[44] Dembski. 2002d. "Evolution's Logic of Credulity: An Unfettered Response to Allen Orr." Talk Reason, www.talkreason.org/articles/wheel.cfm. Accessed on December 12, 2004.


Mark Perakh is a native of Ukraine. He was for many years a professor of physics at various universities in the USSR, as well as in three other countries. He came to the US in 1978 as a visiting scientist at the IBM research center, later joined the faculty of the California State University, and retired in 1994 as an emeritus professor. He has to his credit nearly 300 scientific papers, four books, and a number of patents, and was awarded several prizes for his research, including one from the Royal Society of London. He has also been active in the anti-creationism debate; his recent book is Unintelligent Design (Prometheus, 2004).
