Showing posts with label design. Show all posts

Friday, June 22, 2018

Language and specified complexity

Roughly speaking—but precisely enough for our purposes—Dembski’s criterion for the specified complexity of a system is that a ratio of two probabilities, pΦ/pL, is very low. Here, pL is the probability that by generating bits of a language L at random we will come up with a description of the system, while pΦ is the physical probability of the system arising. For instance, when you have the system of 100 coins all lying heads up, pΦ = (1/2)^100 while pL is something like (1/27)^9 (think of the description “all heads” generated by generating letters and spaces at random), so that pΦ/pL is something like 6 × 10^−18. Thus, the coin system has specified complexity, and we have significant reason to look for a design-based explanation.
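For concreteness, the toy numbers above can be checked in a few lines (the 9-character description and the 27-symbol alphabet of letters plus space are just the illustrative assumptions of the example):

```python
# Specified-complexity ratio for 100 coins all lying heads up,
# using the toy numbers from the example above.

p_phys = (1 / 2) ** 100   # physical probability of all-heads
p_lang = (1 / 27) ** 9    # chance of randomly typing the 9-symbol
                          # description "all heads" from a 27-symbol alphabet

ratio = p_phys / p_lang
print(ratio)              # roughly 6e-18
```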

I’ve always been worried about the language-dependence of the criterion. Consider a binary sequence that intuitively lacks specified complexity, say this sequence generated by random.org:

  • 0111101001100111010101011001100111001110000110011110101101101101001011011000011101100111100111111111

But it is possible to have a language L where the word “xyz” means precisely the above binary sequence, and then relative to that language pL will be much, much bigger than 2^−100 = pΦ.

However, I now wonder how much this actually matters. Suppose that L is the language that we actually speak. Then pL measures how “interesting” the system is relative to the interests of the one group of intelligent agents we know well—namely, ourselves. And interest relative to the one group of intelligent agents we know well is evidence of interest relative to intelligent agents in general. And when a system is interesting relative to intelligent agents but not probable physically, that seems to be evidence of design by intelligent agents.

Admittedly, the move from ourselves to intelligent agents in general is problematic. But we can perhaps just sacrifice a dozen orders of magnitude to the move—maybe the fact that something has an interest level pL = 10^−10 to us is good evidence that it has an interest level at least 10^−22 to intelligent agents in general. That means we need the pΦ/pL ratio to be smaller to infer design, but the criterion will still be useful: it will still point to design in the all-heads arrangement of coins, say.

Of course, all this makes the detection of design more problematic and messy. But there may still be something to it.

Thursday, March 13, 2014

Simplicity as a sign of design

It seems hard to deny that simplicity is a guide to truth in science. The best account of the simplicity of a theory is brevity of expression in a language whose terms cleave nature at the joints. But that brevity of expression in a language is a guide to truth is itself a sign that a rational being is behind our universe.

Friday, April 5, 2013

Design, evolution and many worlds

The following image graphs the outcome of a simulation of a random process where a particle starts in the middle of the big square and moves by small steps in random directions until it reaches an edge.



It sure looks from the picture like there was a bias in the particle in favor of movement to the right, and that the particle was avoiding the black lines (you can see a few points where it seems to be approaching the black lines and then jumps back) and searching for the red edge on the right.  If you saw the particle behaving in this way, you might even think that the particle has some rudimentary intelligence or is being guided.  To increase the impression of this, one could imagine the particle doing something similar through a complex labyrinth.

But in fact the picture shows a run that doesn't involve any such bias or intelligence or guidance.  However, it took 23774 runs until I got the one in the picture!  What I did is I had the computer repeatedly simulate random runs of a particle, throwing out any where the particle hit the black boundary lines before it hit the red edge.  In other words, there is a bias at work.  However, it is not a bias in the step-by-step movements of the particle, but a selection bias--to get the picture above, I had to discard 23773 pictures like this:



Sampling multiple cases with a selection bias can produce the illusion of design.  Most cases look like the second diagram, but if I only get to observe cases that meet the criteria of hitting the red edge before hitting any black edge, I get something that looks designed to meet the criteria (it looks like the process is run by biased chances, whereas the bias comes from conditioning on the criteria).
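The selection procedure can be sketched in a few lines; the square size, step size, and edge layout below are parameters of my own choosing, not necessarily those of the original simulation:

```python
import math
import random

def run_walk(size=100, step=1.0):
    """One unbiased random walk from the center of a size x size square.
    Returns 'right' if the walk exits through the red right edge,
    'other' if it first hits one of the black edges."""
    x = y = size / 2
    while True:
        theta = random.uniform(0, 2 * math.pi)  # no directional bias at all
        x += step * math.cos(theta)
        y += step * math.sin(theta)
        if x >= size:
            return "right"
        if x <= 0 or y <= 0 or y >= size:
            return "other"

# Selection bias: keep only the first run that reaches the red edge,
# silently discarding every run that hits a black edge first.
random.seed(0)
discarded = 0
while run_walk() != "right":
    discarded += 1
print(discarded)  # runs thrown away before the "designed-looking" one
```

Every individual step is perfectly unbiased; the apparent rightward "purpose" enters only through the discard loop at the end.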

Now, suppose that evolutionary processes occur at a large number of sites--maybe a very large number of planets in a single large universe or maybe in a large number of universes.  Suppose, further, that intelligence is unlikely to evolve at any particular evolutionary site.  Maybe most sites only produce unicellular critters or vast blankets of vegetation.  But a few produce intelligence.  We then will have a sampling bias in favor of the processes happening at sites where intelligence results.  And at such sites, the evolutionary processes will look like they have a forward-looking bias in favor of the production of intelligence, just as in my first diagram it looks like there is a bias in favor of getting to the red line and avoiding the black lines (think of the diagram as phase space, with the black lines marking where total extinction occurs and the red line marking where there is intelligent life).  

This means that we will have the appearance of design aimed at intelligence.  This forces a caution for both intelligent design theorists and evolutionary theorists if there is reason to think there is a large number of sites with life.  

The intelligent design theorists need to be very cautious about taking apparent end-directedness in the phylogenetic history that led to human beings to be evidence for design.  For given a large number of life sites and the anthropic principle, we expect to see apparent directedness at the production of intelligence in the process, just as in my first picture there is apparent red-directedness and black-avoidance.  This means that intelligent design theorists would do well to focus on apparent design in lineages that do not lead to us, since such design is not going to suffer from the same anthropic selection bias.  The cool stuff that spiders do is better fodder for intelligent design arguments than the mammalian eye, because the mammalian eye "benefited" from anthropic selection.  However, this also weakens the design arguments.  For design (pace some prominent intelligent design theorists) involves offering an explanation of a phenomenon in terms of the reasons a plausible candidate for a designer would likely be responsive to.  If the phenomenon is one that promotes the development of intelligent life, the design explanation could be quite good, for there are excellent reasons for a designer to produce intelligent life--intelligent life is objectively valuable.  But if the phenomenon is a spider's catching of flies, the reasons imputed to the designer become less compelling, and hence the design explanation becomes weakened.  

On the other hand, evolutionary theorists need to be careful in making precise generalizations about things like rates of beneficial mutations that apply equally to our ancestors and to the ancestors that other organisms have not in common with us.  For given a large number of sites where life develops, we would expect differences in such things due to the anthropic sampling bias.

This also suggests that we actually could in principle have evidence that decides between the hypotheses: (a) intelligent design, (b) naturalistic evolution at a small number of sites and (c) naturalistic evolution at a large number of sites.  

Suppose we find that the rate of favorable mutations among our ancestors was significantly higher than  the rate of favorable mutations not among our ancestors.  This offers support for (c), and maybe to a lesser degree for (a), but certainly against (b).  But suppose we find equality in the rates of favorable mutations among our ancestors and among our non-ancestors.  Then that offers evidence against (c) and depending on whether the rates are as we would expect on evolutionary grounds, it is evidence for (b) or for (a).  

I am assuming here that the number of sites is finite or there is some way of handling the issues with probabilities in infinite cases.

Friday, November 23, 2012

Prayer and Thomistic accounts of chance and design

On Thomistic accounts of chance and design, God micromanages the outcomes of chancy processes by means of primary causation, ensuring that the processes secondarily cause precisely the results that God wants. (Thomists often say a similar thing about free will, too.) On such accounts we can distinguish between two different ways that God can achieve a result, which I will call the miracle and natural methods. In the miracle method, God suspends the causal powers of the chancy process and directly causes the specific outcome he wants. If he does this on a die toss (I'll assume that die tosses are indeterministic), then the hand tosses the die, but somewhere there will be a break in the natural chain of causes. In the natural method, God causes the causal powers of the chancy process to cause, in the way proper to them, the specific outcome he wants. Presumably, given that the natural method preserves the value of finite causes' activity, much of the time God providentially acts using the natural rather than the miracle method.

Now suppose that I am about to toss a die. And suppose that I pray, for all the right reasons (say, a good to a friend will result from non-six, and nobody will be harmed by it) and in the right way, that the die should show a non-six, while no one else prays that it should show a six. Moreover, suppose that God in fact does not have any significant counterbalancing reasons in favor of the die showing six. Let C be a complete description of the state of the world--including all the facts about the universe on which God's reasons are based--just before the die toss result. This seems a paradigmatic case for God to be moderately likely to exercise providential control. Moreover, let us suppose with the typical Thomist that almost all the time, excepting cases of particularly spectacular demonstrations, God exercises providential control by the natural method. Suppose then:

  • P(God wills non-six | C and no miracle) > 0.95.
And since God wills non-six if and only if non-six occurs on the Thomistic view:
  • P(non-six occurs | C and no miracle) > 0.95.

Suppose that in fact three occurs. It is then obviously correct to explain the non-six by adverting to the above 0.95 probability. The question of interest to me is this: Can we also explain the non-six by the fact that natural causes described in C, in isolation from the facts about prayer and the like, had a probability of 5/6 of producing a non-six?

Tuesday, November 20, 2012

Barr on chance and design

Stephen Barr has an article in First Things where he argues that there is no conflict at all between chance and divine design. The position seems to hinge on two claims:

  1. A series of events is chancy if and only if the secondary (i.e., finite, non-divine) causes of the events are independent of one another.
  2. God controls series of events by primary causation.
Given (1) and (2), there is no conflict between chance and divine design, since (1) says nothing about dependencies among events induced by primary causation.

I wish Barr's account worked. I'd love for there to be a good account of the interplay of chance and design. But there are a number of serious problems with Barr's proposal.

I. The account of chanciness does not work in the case of a single event. Depending on how we read (1), a single event will either trivially count as chancy (since its cause is independent of the causes of all other events in the series, there being no others) or it will never count as chancy. But single events can be chancy (imagine a universe where there is only one quantum collapse happening) or non-chancy.

II. Chance is explanatory, both in gambling and in evolution. But independence of causes has no explanatory force--without probabilities or chances, it generates no useful statistical predictions or explanations (here is a very technical way to make the point). So Barr's account needs something more, something like objective tendencies of the secondary causes that give rise to probabilities. I will assume in some of the following criticisms that something like this has been added.

III. Suppose I go to the casino and I play the slot machine a thousand times, and each time win, due to independent secondary causes, because God so arranged it. It seems absurd to say that I won by chance, and yet on Barr's definition my winning is a matter of chance.

IV. The preceding case shows that it is difficult to see how one can make any probabilistic predictions about a chancy series of events. Consider an infinite sequence of coin tosses where the limiting frequency of heads is 1/2. God can just as easily make this infinite sequence of tosses come out in the case of an ordinary fair coin as in the case of a coin heavily biased in favor of heads. When God controls series of events by primary causation—and as far as Barr's position goes, this could be always—it is not clear why we should expect frequencies to match the probabilities arising from the tendencies of secondary causes. The frequencies of events will be precisely what God needs them to be for his purposes. Why think his purposes match the probabilistic tendencies of secondary causes? Now it could be that God wills to ensure that the actual frequencies usually match the secondary causal tendencies, in order that the universe be simpler and more predictable. That is a reasonable hypothesis. But then it seems that it isn't the secondary causal tendencies that are directly explanatory of the observed frequencies; rather, the explanation of the observed frequencies is God's will. I.e., in the case of a fair coin, the reason the limiting frequency of heads comes out as 1/2 is that God willed to ensure that the limiting frequency match the secondary causal tendency of the coin. The secondary causal tendency of the coin is still explanatorily responsible for this outcome (because God willed to match the frequency to it), but it isn't causally responsible for this outcome (unless we take an occasionalist analysis of secondary causation).

V. In light of III and IV, no statistical prediction can be made from probabilistic facts about the causal tendencies of secondary causes without an implicit auxiliary hypothesis that God through primary causation willed a particular series of events whose statistical features match the stochastic features of the causes. This is not entirely special to Barr's account—probably every theistic account requires an implicit auxiliary hypothesis that God works no miracle here. But in Barr's case there is a difference—it's not just a hypothesis that God works no miracle here, since in the case where God makes me win the slot machine a thousand times in a row, on Barr's view no miracle has occurred, just the ordinary chancy operation of secondary causes and God's primary causal oversight. So Barr's view needs two auxiliary hypotheses to generate empirical predictions from scientific data: a no-miracles hypothesis like in every theistic case and a hypothesis of stochastic-to-statistical match.

VI. Random processes need not involve independent causes. Take, for instance, Markov chains or exchangeable sequences of random variables.

VII. The elliptical orbits of the planets in our solar system and of the planets in another solar system have independent causes—the gravitational influences of different bodies—and hence by Barr's criterion the two events are chancy. But it's not chance that the orbits are elliptical. Now maybe Barr will count these cases as not independent because they are governed by the same laws of nature. True, they are. But so are paradigmatically chancy events, like the results of successive quantum collapse experiments.

Wednesday, March 30, 2011

A gratitude/resentment argument

This argument is inspired by an argument of Kenneth Pearce.

  1. (Premise) It is sometimes appropriate to be grateful for or to the universe or to be resentful for or at the universe.
  2. (Premise) It is only appropriate to be grateful for or to A if A is an agent or an effect of an agent.
  3. (Premise) It is only appropriate to be resentful for or at A if A is an agent or an effect of an agent.
  4. Therefore, the universe is an agent or an effect of an agent.
  5. (Premise) If the universe is an agent or an effect of an agent, naturalism is false.
  6. Therefore, naturalism is false.

Wednesday, July 21, 2010

A defense (well, sort-of) of specified complexity as a guide to design

I will develop Dembski's specified complexity in a particular direction, which may or may not be exactly his, but which I think can be defended to a point.

Specified Complexity (SC) comes from the fact that there are three somewhat natural probability measures on physical arrangements. For definiteness, think of physical arrangements as black-and-white pixel patterns on a screen, and then there are 2^n arrangements where n is the number of pixels.

There are three different fairly natural probability measures on this.

1. There is what one might call "a rearrangement (or Humean) measure" which assigns every arrangement equal probability. In the pixel case, that is 2^−n.

2. There is "a nomic measure". Basically, the probability of an arrangement is the probability that, given the laws (and perhaps the initial conditions--we're going to have two ways of doing it: one holding the initial conditions fixed, and one allowing them to vary), such an arrangement would arise.

3. There is what one might call "a description measure". This is relative to a language L that can describe pixel arrangements. One way to generate a description measure is to begin by generating random finite-length strings of symbols from L supplemented with an "end of sentence" marker which, when generated, ends a string. Thus, the probability of a string of length k is m^−k where m is the number of symbols in L (including the end of sentence marker). Take this probability measure and condition on (a) the string being grammatical and (b) describing a unique arrangement. The resulting conditional probability measure on the sentences of L that describe a unique arrangement then gives rise to a probability measure on the arrangements themselves: the description probability of an arrangement A is the (conditionalized as before) probability that a sentence of L describes A.
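As a toy sketch of the raw string probabilities (before the conditioning on grammaticality and unique reference, which I omit here), with a hypothetical 26-symbol alphabet plus the end marker:

```python
from fractions import Fraction

def raw_string_probability(description, alphabet_size=26):
    """Probability of generating exactly this k-symbol string followed by
    the end-of-sentence marker, drawing each symbol uniformly from the
    alphabet plus the marker: m^-(k+1) with m = alphabet_size + 1."""
    m = alphabet_size + 1
    return Fraction(1, m) ** (len(description) + 1)

# A shorter description is exponentially more probable than a longer one:
short = raw_string_probability("(x)(Bx)")          # 7 symbols
long_ = raw_string_probability("Bx1&Bx2&Wx3&Bx4")  # 15 symbols
print(short / long_)  # 27^8: the short description is vastly likelier
```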

So, basically we have the less anthropocentric nomic and rearrangement measures, and the more anthropocentric description measure. The rearrangement measure has no biases. The nomic measure has a bias in favor of what the laws can produce. The description measure has a bias in favor of what can be more briefly described.

We can now define SC of two sorts. An arrangement A has specified rearrangement (respectively, nomic) complexity, relative to a language L, provided that A's rearrangement (respectively, nomic) measure is much smaller than its L-description measure. (There is some technical stuff to be done to extend this to less specific arrangements--the above works only for fully determinate arrangements.)

For instance, consider the arrangement where all the pixels are black. In a language L based on First Order Logic, there are some very short descriptions of this: "(x)(Bx)". So, the description measure of the all-black arrangement will be much bigger than the description measure of something messy that needs a description like "Bx1&Bx2&Wx3&...&Bxn". On the other hand, the rearrangement measure of the all-black arrangement is the same as that of any other arrangement. In this case, then, the L-description measure of the all-black arrangement will be much greater than its rearrangement measure, and so we will have specified rearrangement complexity, relative to L. Whether we will also have specified nomic complexity depends on the physics involved in the arrangement.

All of the above seems pretty rigorous, or capable of being made so.

Now, given the above, we have the philosophical question: Does SC give one reason to suppose agency? Here is where things get more hairy and less rigorous.

An initial problem: The concept of SC is language-relative. For any arrangement A, there is a language L1 relative to which A lacks complexity and a language L2 relative to which A has complexity. So SC had better be defined in terms of a privileged kind of language. I think this is a serious problem for the whole approach, but I do not know that it is insuperable. For instance, easily inter-translatable languages are probably going to give rise to similar orders of magnitude within the description measures. We might require that the language L be the language of a completed and well-developed physics. Or we might stipulate L to be some extension of FOL with the predicates corresponding to the perfectly normal properties. There are tough technical problems here, and I wish Dembski would do more here. Call any language that works well here "canonical".

Once we have this taken care of, if it can be done, we can ask: Is there any reason to think that SC is a mark of design?

Here, I think Dembski's intuition is something like this: Suppose I know nothing of an agent's ends. What can I say about the agent's intentions? Well, an agent's space of thoughts is going to be approximately similar to a canonical language (maybe in some cases it will constitute a canonical language). Without any information on the agent's ends, it is reasonable to estimate the probabilities of an agent having a particular intention in terms of the description measure relative to a canonical language.

But if this is right, then the approach has some hope of working, doesn't it? For suppose you have nomic specified complexity of an arrangement A relative to a canonical language L. Then P(A|no agency) will be much smaller than the L-description measure of A, which is an approximation to P(A|agency) with no information about the sort of agency going on. Therefore, A incrementally confirms the agency hypothesis. The rest is a question of priors (which Dembski skirts by using absolute probability bounds).
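The incremental-confirmation claim is just Bayes' theorem; with made-up illustrative numbers (none of these values come from Dembski):

```python
# All three numbers below are placeholders for illustration only.
prior_agency = 0.01           # hypothetical prior for the agency hypothesis
p_A_given_agency = 1e-10      # description-measure estimate of P(A | agency)
p_A_given_no_agency = 1e-30   # nomic measure of A, i.e. P(A | no agency)

# Bayes' theorem:
posterior = (p_A_given_agency * prior_agency) / (
    p_A_given_agency * prior_agency
    + p_A_given_no_agency * (1 - prior_agency)
)
print(posterior)  # very close to 1: observing A strongly confirms agency
```

The twenty orders of magnitude between the two likelihoods swamp even a small prior, which is why the question of priors can be "skirted" with absolute probability bounds.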

I think the serious problems for this approach are:

  • The problem of canonical languages.
  • The problem that in the end we want this to apply even to supernatural designers who probably do not think linguistically. Why think that briefer descriptions are more likely to match their intentions?
  • We do have some information on the ends of agents in general--agents pursue what they take to be valuable. And the description measure does not take value into account. Still, insofar as there is value in simplicity, and the description measure favors briefer descriptions, the description measure captures something of value.

Saturday, July 17, 2010

A new argument against an undesigned very large multiverse

  1. In a very large (e.g., infinite) undesigned multiverse, there is nothing incongruous.
  2. There is something genuinely intrinsically funny in our universe.
  3. The genuinely intrinsically funny is incongruous.
  4. Therefore, our universe is not a part of a very large undesigned multiverse.
The point behind (1) is that when you have all these probabilistic resources, you don't have incongruity, no matter how weird the stuff in it is.

Terry Pratchett suggests that his Discworld is either the product of gods with a sense of humor, or just something in an infinite multiverse. If the latter, then according to (1) and (3), nothing happening in the Discworld is genuinely intrinsically funny: it is only funny relative to us, i.e., the incongruity is between that world and ours. I prefer the divine sense of humor hypothesis as an interpretation. (I am now reading Making Money. Lots of fun so far.)

Friday, July 16, 2010

A new argument for design

  1. (Premise) The platypus is genuinely funny.
  2. (Premise) The genuinely funny is incongruous.
  3. (Premise) The platypus is either a result of evolution or design (by a designer) or both.
  4. (Premise) If the platypus is a result of evolution without design (by a designer), it is not incongruous.
  5. Therefore, the platypus is a result of design (by a designer).
  6. Therefore, there is a designer.
The most controversial premise is (4). But look—if evolution is running the platypus show, without a designer, behind everything there is randomness. But there is nothing incongruous in weird stuff that arose from randomness.

It is tempting to conclude from (6) that the designer has a sense of humor. But that takes extra steps. For there are two ways that a designer can produce something funny: by comic skill and by comic failure. So we would need a way of eliminating the latter possibility. I think to do that one would need a plausibility argument: the platypus requires great intelligence, and does not appear to be a failure.

Light-hearted as the above argument is, it points to an important feature of the debate between the naturalist and the theist. The theist can take as independent objective explananda features of the world that the naturalist has to eliminate away, reduce to the agent's subjectivity (e.g., the platypus being funny just means we find it funny; M 42 being beautiful just means we find it beautiful) or explain by means of a coincidence (e.g., such-and-such features were a selective advantage, and as it happens, it is a necessary truth that such-and-such features are funny or beautiful or whatever). The theist can take many of these humanly significant features of the world at face value, and then explain them as taken at face value.

[Minor changes made.-ARP]

Friday, April 23, 2010

Dembski and data compression

One of Dembski's approaches to determining whether a set of data is the result of design is to check whether it is compressible. Thus, the series of alleged dice throws 1111111111 is suspicious, while 4124262422 is not suspicious. One of Dembski's explanations is that the former can be easily compressed (e.g., with a run length encoding, say "1*10") while the latter cannot. McGrew offers the following objection: "We can tell instantly that novels and software code are the products of intelligent agency, though neither War and Peace nor Microsoft Word is algorithmically compressible." This is embarrassingly false. War and Peace and Microsoft Word are algorithmically compressible. For instance, take Microsoft Word:

$ wc -c WINWORD.EXE
12314456 WINWORD.EXE
$ bzip2 < WINWORD.EXE | wc -c
6386617
So, yes, Microsoft Word compressed by about a half. How about War and Peace?
$ wc -c WarAndPeace.txt
3288738 WarAndPeace.txt
$ bzip2 WarAndPeace.txt
$ wc -c WarAndPeace.txt.bz2
884546 WarAndPeace.txt.bz2
So, the compressed version is 27% of the original. Oops! Seems like Dembski's criterion works for Word and War and Peace.

However, we can easily make Dembski's criterion fail. I'll just do it with War and Peace because it's out of copyright.[note 1]

$ mv WarAndPeace.txt.bz2 WarAndPeace.compressed
$ bzip2 WarAndPeace.compressed
$ wc -c WarAndPeace.compressed.bz2
887810 WarAndPeace.compressed.bz2
So, when I try to compress the compressed version of War and Peace, I get a result that's 0.3% larger. In other words, the compressed version of War and Peace fails the Dembski criterion. Obviously, compression cannot always be iterated successfully, or we'd compress every finite text to nothing. But my WarAndPeace.compressed file is just as much the product of intelligent design as WarAndPeace.txt. In fact, it is the product of a greater amount of design: there is Tolstoy's authorship, and there is Julian Seward's design of the bzip2 algorithm.

Now, could there be an algorithm that could compress my WarAndPeace.compressed file? No doubt. For instance, I could decompress it with bunzip2 and then apply a more efficient compression algorithm, like LZMA. However, there is a limit to this approach.
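The whole pattern is easy to reproduce with Python's standard-library bz2 module: patterned data shrinks dramatically, but its compressed, near-random form typically does not shrink again (the sample text here is my own stand-in, not War and Peace):

```python
import bz2

# Highly patterned text compresses very well...
text = b"all heads " * 10000
once = bz2.compress(text)
print(len(text), len(once))   # compressed form is a tiny fraction of the input

# ...but compressing the compressed bytes gains nothing: the output is
# near-random, so a second pass typically comes out slightly larger
# (header overhead with no redundancy left to exploit).
twice = bz2.compress(once)
print(len(once), len(twice))
```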

Tuesday, September 15, 2009

The argument from beauty

The argument from beauty, it seems to me, can come in four varieties, each asking a different "why" question, and each claiming that the best answer entails the existence of a being like God.

1. Why is there such a property as beauty?

This argument is the aesthetic parallel to the standard argument from morality. For it to work, a distinctively theistic answer to (1) must be offered. Parallel to a divine command metaethics, one could offer a divine appreciation meta-aesthetics. I think this gets the direction of explanation wrong—God appreciates beautiful things because they are beautiful. Moreover, if what God appreciates does not modally supervene on how non-divine things are, then divine simplicity will be violated. A better answer is that beautiful things are all things that reflect God in some particular respect, a respect that perhaps cannot be specified better than as that respect in which beautiful things reflect him (I think this is not a vicious circularity).

2. Why are there so many beautiful things?

The laws of physics, biology, etc. do not mention beauty. As far as these laws are concerned, beauty, if there is such a thing, is epiphenomenal. So, it does not seem that a scientific explanation of the existence of beautiful things can be given. But, perhaps, a philosophical account could be given of how, of metaphysical necessity, such-and-such physical states are always beautiful, and maybe then we can explain these entailing states physically. Or maybe one can show philosophically that, necessarily, most random configurations of matter include significant amounts of beauty, and then a statistical explanation can be given. But all that is pie in the sky, while a theistic explanation is right at hand.

3. How do we know that there is beauty?

This is parallel to my favorite argument from morality—the argument from moral epistemology. As far as naturalistic theories go, beauty (like morality) is causally inefficacious. As such, it appears difficult to see how we could have knowledge-conferring faculties that are responsive to beauty. The best story is probably going to be something like this. There is some complex of physical properties which correlates with being beautiful, and for some evolutionary reason, we have a faculty responsive to that complex of physical properties, and hence to beauty. However, the "and hence to beauty" is to be questioned. Evolutionary teleology is tied to fitness. The connection to beauty is fitness-irrelevant, because beauty is naturalistically causally inefficacious. It is at most that complex of physical properties that we are responsive to. But then it is not beauty as such that we are responsive to, that we perceive. Maybe, though, what we perceive is the most natural (in Lewis's sense) property among those that we could reasonably be said to have states covarying with. But the physical correlates are, presumably, also quite natural since having states covarying with them is of evolutionary benefit. Moreover, I deny that the evolutionary teleology should snap to the most natural states, if the most natural ones are evolutionarily irrelevant. All in all, I do not think the prospects for a naturalistic account of our knowledge of beauty are good. But a theistic account is easy.

4. Why do we have aesthetic sensations?

This is an interesting question, but it strikes me as yielding an argument that is distinctly weaker than (3), unless it is just a different way of formulating an aspect of (3) (namely, the aspect of asking how our aesthetic beliefs get their intentionality). Still, the question is puzzling. We see such a very wide variety of things as beautiful: some people, most sunsets, many clouds, some plants, most jellyfish, most tigers, most galaxies, some proofs, some musical compositions, some ideas, etc. It is odd that there would be an evolutionary benefit from being responsive to these things. The more likely naturalistic story is that this is some sort of a spandrel, maybe a spandrel of our recognition of good mate choices. Note that this story undercuts the attempt to evolutionarily ground our knowledge of beauty—it makes for us having aesthetic sensations but not aesthetic knowledge. That's a problem. But I am also not sure that the wide variety of things we sense as beautiful has enough in common for there to be a plausible story. However, that only yields a God of the gaps argument (not that there is anything intrinsically wrong with that).

Monday, September 14, 2009

Aquinas's design argument and evolution

St. Thomas's Fifth Way is:

We see that things which lack intelligence, such as natural bodies, act for an end, and this is evident from their acting always, or nearly always, in the same way, so as to obtain the best result. Hence it is plain that not fortuitously, but designedly, do they achieve their end. Now whatever lacks intelligence cannot move towards an end, unless it be directed by some being endowed with knowledge and intelligence; as the arrow is shot to its mark by the archer. Therefore some intelligent being exists by whom all natural things are directed to their end; and this being we call God.

A standard question about design arguments is whether they aren't undercut by the availability of evolutionary explanations. Paley's argument is often thought to be. But Aquinas' argument resists this. The reason is that Aquinas' argument sets itself the task of explaining a phenomenon which evolutionary theory does not attempt to, and indeed which modern science cannot attempt to, explain. In this way, Aquinas' argument differs from Intelligent Design arguments which offer as their explananda features of nature (such as bacterial flagella) which are in principle within the purview of science.

Aquinas' explanandum is: that non-intelligent beings uniformly act so as to achieve the best result. There are three parts to this explanandum: (a) uniformity (whether of the exceptionless or for-the-most-part variety), (b) purpose ("so as to achieve"), and (c) value ("the best result"). All of these go beyond the competency of science.

The question of why nature is uniform—why things obey regular laws—is clearly one beyond science. (Science posits laws which imply regularity. However, to answer the question of why there is regularity at all, one would need to explain the nature of the laws, a task for philosophy of science, not for science.)

Post-Aristotelian science does not consider purpose and value. In particular, it cannot explain either purpose or value. Evolutionary theory can explain how our ancestors developed eyes, and can explain this in terms of the contribution to fitness from the availability of visual information inputs. But in so doing, it does not explain why eyes are for seeing—that question of purpose goes beyond the science, though biologists in practice incautiously do talk of evolutionary "purposes". But these "purposes" are not purposes, as the failure of evolutionary reductions of teleological concepts shows (and anyway the reductions themselves are not science, but philosophy of science). And even more clearly, evolutionary science may explain why we have detailed visual information inputs, but it does not explain why we have valuable visual information inputs.

Friday, May 8, 2009

Distinguishing multiple universes from design

One way to respond to certain design arguments, including both fine-tuning arguments and arguments from apparent biological design, is to make use of a multiple universe (MU) hypothesis. If there are enough universes (say, infinitely many), and there is the right kind of latitude in the random parameters, it is unsurprising that there be a world exhibiting just about any kind of complexity you like, including intelligent life. And there is no further puzzle about why our universe exhibits this complexity, because there is a selection effect—only universes that have observers can be observed.

One might think that MU and Design hypotheses cannot be distinguished. That's why some design arguments are formulated with a disjunctive conclusion. But they can be distinguished: they have distinct predictions. The MU hypothesis basically says that we should expect to see just enough complexity and tuning needed to produce observers. The Design hypothesis makes it moderately probable that there would be more complexity and tuning, because of the value of living beings that are non-observers. More generally, the difference is that the MU hypothesis involves only a tropism towards intelligence, while the Design hypothesis involves a tropism towards the instantiation of values. So in theory the two hypotheses could be distinguished.

Monday, March 16, 2009

More on evolutionary theories of mind

According to evolutionary theories of mind, that we have evolved under certain selective pressures not only causally explains our mental functioning, but in fact is essential to that very functioning. Thus, if an exact duplicate of one of us came into existence at random, with no selection, it would not have a mind. The reason is that components of minds have to have proper functions, and proper functions in us are to be analyzed through natural selection.

Of course, there could be critters whose proper function is to be analyzed in terms of artificial selection, or even in terms of design by an agent. But as it happens, we are not critters like that, says the evolutionary theorist of mind. Nonetheless, it is important that the account of proper function be sufficiently flexible that artificial selection would also be able to give rise to proper function (after all, how would one draw the line between artificial and natural selection, when artificial selectors—say, human breeders—are apt themselves to be a part of nature?). Moreover, typically, the evolutionary analysis of proper function is made flexible enough that agential design gives rise to proper function as well. The basic idea—which is made more sophisticated in the newer accounts to avoid counterexamples—is that it is a proper function of x to do A if and only if x-type entities tend to do A and x-type entities now exist in part because of having or having had this tendency. Thus, a horse's leg has running fast as one of its proper functions, because horses' legs do tend to run fast, and now exist in part because of having had this tendency. A guided missile has hitting the target as a proper function, because it tends to do that, and guided missiles exist in part because of having this tendency (if they didn't have this tendency, we wouldn't have made them).

Whatever the merits of these kinds of accounts of proper function, I think it is easy to see that such an account will not be satisfactory for philosophy of mind purposes. To see this, consider the following evolutionary scenario (a variant on one that the ancient atomists posited). Let w0 be the actual world. Now consider a world w1, where at t0 there is one super-powerful alien agent, Patricia, and she has evolved in some way that will not concern us. Suddenly, at t0, a rich infinite variety of fully formed organisms comes into existence, completely at random, scattered throughout an infinity of planets. There are beings like dogheaded men, and beings like mammoths, and beings like modern humans, behaving just like normal humans. On the evolutionary theorist's view, these are all zombies. A minute later, at t1, Patricia instantaneously kills off all the organisms that don't match her selective criteria. Her selective criteria in w1 happen to be exactly the same ones that natural selection implemented in w0 by the year 2009. Poof, go the mammoth-like beings in w1, since natural selection killed them off by 2009 in w0. However, humanoids remain.

At t1, the survivors in w1 have proper functions according to the evolutionary theorist. Moreover, they have the exact same proper functions as their analogues in w0 do, since they were selected for on the basis of exactly the same selective principle. This was a case of artificial selection, granted, but still selection.

But it is absurd that a bunch of zombies would instantaneously become conscious simply because somebody killed off a whole bunch of other zombies. So the evolutionary account of proper function, as applied to the philosophy of mind, is absurd.

Maybe our evolutionary theorist will say: Well, they don't get proper functions immediately. Only the next generation gets them. Selection requires a generation to pass. However, she can only say this if she is willing to say that agency does not give rise to proper function. After all, agency may very well work by generating a lot of items, and then culling the ones that the agent does not want. Pace Plantinga, I do not think it is an absurd thing to say that agency does not give rise to proper function, but historically a lot of evolutionary accounts of proper function were crafted so as to allow for design-based proper functions. Moreover, it would seem absurd to suppose that a robot we directly made couldn't be intelligent at all, but its immediate descendant could be.

I think the above shows that we shouldn't take agential design to generate proper function (at least not normally; maybe a supernatural agent could produce teleological facts, but that would be by doing something more than just designing in the way an engineer does), at least not if we want proper function to do something philosophically important for us. Nor do I think we should take evolution to generate proper function (my earlier post on this is particularly relevant here). Unless we are Aristotelians—taking proper function not to be reducible to non-teleological facts—we have no right to proper function. And thus if the philosophy of mind requires proper function, it requires Aristotelian metaphysics.