Showing posts with label reliabilism. Show all posts

Thursday, April 13, 2023

Barn facades and random numbers

Suppose we have a long street with building slots officially numbered 0-999, but with the numbers not posted. At numbers 990–994 and 996–999, we have barn facades with no barn behind them. At all the other numbers, we have normal barns. You know all these facts.

I will assume that the barns are sufficiently widely spaced that you can’t tell by looking around where you are on the street.

Suppose you find yourself at #5 and judge that you are in front of a barn. Intuitively, you know you are in front of a barn. But if you find yourself at #995 and judge that you are in front of a barn, you are right, but you don’t know it, since you are surrounded by mere barn facades.

At least that’s the initial intuition (it’s a “safety” intuition in epistemology parlance). But note first that this intuition is based on an unstated assumption, that the buildings are numbered in order. Suppose, instead, that the building numbers were allocated by someone suffering from a numeral reversal disorder, so that, from east to west, the slots are:

  • 000, 100, 200, …, 900, 010, 110, 210, …, 999.

Then when you are at #995, your immediate neighborhood looks like:

  • 595, 695, 795, 895, 995, 006, 106, 206, 306.

And all these are perfectly normal barns. So it seems you know.
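The reversed numbering is easy to compute; here is a minimal sketch (the helper name is mine), which reproduces the neighborhood listed above:

```python
# Digit-reversal numbering: slot k, counted from the east end of the
# street, displays the reversal of k's three-digit representation.
def posted_number(slot: int) -> int:
    return int(f"{slot:03d}"[::-1])

order = [posted_number(k) for k in range(1000)]
slot_of_995 = order.index(995)               # slot 599
print(order[slot_of_995 - 4 : slot_of_995 + 5])
# → [595, 695, 795, 895, 995, 6, 106, 206, 306]
```

(#006 and #106 print as the integers 6 and 106.)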

But why should knowledge depend on geometry? Why should it matter whether the numerals are apportioned east to west in standard order, or in the order going with the least-significant-digit-first reinterpretation?

Perhaps the intuition here is that when you are at a given number, you could “easily have been” a few buildings to the east or to the west, while it would have been “harder” for you to have been at one of the further away numbers. Thus, it matters whether you are geometrically surrounded by mere barn facades or not.

Let’s assume from now on that the buildings are arranged east to west in standard order: 000, 001, 002, …, 999, and you are at #995.

But how did you get there? Here is one possibility. A random number was uniformly chosen between 0 and 999, hidden from you, and you were randomly teleported to that number. In this case, is there a sense in which it was “easy” for you to have been assigned a neighboring number (say, #994)? That depends on details of the random selection. Here are four cases:

  1. A spinner with a thousand slots was spun.

  2. A ten-sided die (sides numbered 0-9) was rolled thrice, generating the digits in order from left to right.

  3. The same as the previous, except the digits were generated in order from right to left.

  4. A computer picked the random number by first accessing a source of randomness, such as the time, to the millisecond, at which the program was started (or timings of keystrokes or fine details of mouse movements). Then a mathematical transformation was applied to the initial random number, to generate a sequence of cryptographically secure pseudorandom numbers whose relationship to the initial source of randomness is quite complex, eventually yielding the selected number. The mathematical transformations are so designed that one cannot assume that when the inputs are close to each other, the outputs are as well.

In case 1, it is intuitively true that if you landed at #995, you could “easily have been” at 994 or 996, since a small perturbation in the input conditions (starting position of spinner and force applied) would have resulted in a small change in the output.

In case 2, you could “easily have been” at 990-994 or 996-999 instead of 995, since all of these would have simply required the last die roll to have been different. In case 3, it is tempting to say that you could easily have been at these neighboring numbers since that would have simply required the first die roll to have been different. But actually I think cases 2 and 3 are further apart than they initially seem. If the first die roll came out differently, likely rolls two and three would have been different as well. Why? Well, die rolls are sensitive to initial conditions (the height from which the die is dropped, the force with which it is thrown, the spin imparted, the initial position, etc.) If the initial conditions for the first roll were different for some reason, it is very likely that this would have disturbed the initial conditions for the second roll. And getting a different result for the first roll would have affected the roller’s psychological state, and that psychological state feeds in a complex way into the way they will do the second and third rolls. So in case 3, I don’t think we can say that you could “easily” have ended up at a neighboring number. That would have required the first die roll to be different, and then, likely, you would have ended up quite far off.
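For concreteness, the two assembly orders in cases 2 and 3 can be sketched as follows (the function names are mine):

```python
# Case 2: digits generated left to right, so the first roll is the
# hundreds digit and the LAST roll is the units digit.
def left_to_right(rolls):
    a, b, c = rolls
    return 100 * a + 10 * b + c

# Case 3: digits generated right to left, so the FIRST roll is the
# units digit.
def right_to_left(rolls):
    a, b, c = rolls
    return a + 10 * b + 100 * c

# Reaching a neighbor of 995 requires changing the last roll in
# case 2, but the first roll in case 3:
print(left_to_right([9, 9, 5]), left_to_right([9, 9, 4]))  # → 995 994
print(right_to_left([5, 9, 9]), right_to_left([4, 9, 9]))  # → 995 994
```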

Finally, in case 4, a good pseudorandom number generator is so designed that the relationship between the initial source of randomness and the outputs is sufficiently messy that a slight change in the inputs is apt to lead to a large change in the outputs, so it is false that you could easily have ended up at a neighboring number—intuitively, had things been different, you wouldn’t have been any more likely to end up at 994 or 996 than at 123 or 378.
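Case 4 is easy to emulate; here is a minimal sketch using a cryptographic hash as the “messy” transformation (my own construction for illustration, not a description of any particular generator):

```python
import hashlib

# Map a millisecond timestamp (the initial source of randomness) to a
# building number through SHA-256, so that inputs one millisecond
# apart yield unrelated outputs.
def building(start_ms: int) -> int:
    digest = hashlib.sha256(str(start_ms).encode()).digest()
    return int.from_bytes(digest, "big") % 1000

# Consecutive-millisecond inputs scatter across 0-999 rather than
# landing on neighboring numbers:
print([building(1_000_000 + i) for i in range(5)])
```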

I think at this point we can’t hold on, without further qualification about how you ended up where you are, to the initial intuition that at #995 you don’t know you’re in front of a barn while at #5 you would have known. Maybe if you ended up at #995 via the spinner or the left-to-right die rolls, you don’t know, but if you ended up there via the right-to-left die rolls or the cryptographically secure pseudorandom number generator, then there is no relevant difference between #995 and #5.

At this point, I think, the initial intuition should start getting destabilized. There is something rather counterintuitive about the idea that the details of the random number generation matter. Does it really matter for knowledge whether the building number you were transported to was generated right-to-left or left-to-right by die rolls?

Why not just say that you know in all the cases? In all the cases, you engage in simple statistical reasoning: of the 1000 buildings, 991 are real barns and only 9 are mere facades, and it’s random which one is in front of you, so it is reasonable to think that you are in front of a real barn. Why should the neighboring buildings matter at all?

Perhaps it is this. In your reasoning, you are assuming you’re not in the 990-999 neighborhood. For if you realized you were in that neighborhood, you wouldn’t conclude you’re in front of a barn. But this response seems off-base for two reasons. First, by the same token you could say that when you are at #5, you are assuming you’re not in front of any of the buildings from the following set: {990, 991, 992, 993, 994, 5, 996, 997, 998, 999}. For if you realized you were in front of a building from that set, you wouldn’t have thought you are in front of a barn. But that’s silly. Second, you aren’t assuming that you’re not in the 990-999 neighborhood. For if you were assuming that, then your confidence that you’re in front of a real barn would have been the same as your confidence that you’re not in the 990-999 neighborhood, namely 0.990. But in fact, your confidence that you’re in front of a real barn is slightly higher than that, it is 0.991. For your confidence that you’re in front of a real barn takes into account the possibility that you are at #995, and hence that you are in the 990-999 neighborhood.
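The two confidence figures in this paragraph can be checked directly; a sketch under the post’s setup (facades at 990-994 and 996-999):

```python
facades = set(range(990, 995)) | set(range(996, 1000))  # 9 mere facades
neighborhood = set(range(990, 1000))                    # includes #995 itself

p_real_barn = (1000 - len(facades)) / 1000
p_outside_neighborhood = (1000 - len(neighborhood)) / 1000

print(p_real_barn, p_outside_neighborhood)  # → 0.991 0.99
```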

Thursday, September 8, 2016

Evolutionary knowledge formation

Suppose a very large group of people form their scientific beliefs at random. And then aliens kill everybody whose scientific beliefs aren't largely true. Surely the survivors don't know their scientific beliefs to be true, assuming they don't know about the aliens killing off those who were wrong. After all, they didn't know their beliefs to be true prior to the massacre, and the massacre of the erring didn't give them knowledge. (This case may be a problem for certain kinds of reliabilism.)

Suppose now that we came to believe certain truths--whether mathematical or empirical or moral--"for evolutionary reasons". In other words, those who had these beliefs survived and reproduced, and those who didn't didn't. Let's even suppose that there is a tight connection between the truth of the beliefs and their fitness value. Nonetheless I am not sure that this story is sufficiently different from the story about aliens to make the beliefs into knowledge. The crucial difference, I guess, is that the story about the aliens is a single-generation story, while the evolutionary story is a multi-generation story. But I am not sure that matters at all. Suppose, for instance, that we modify the alien story to add a new generation who takes their scientific beliefs from the survivors of the first generation. Surely if you get your scientific beliefs from people who don't know, and no additional evidence is injected, you don't know either.

Monday, August 8, 2011

A reliabilist moral argument for the truth of some religious belief

Let P be the process of genetically or mimetically producing belief in non-empirical claims in order to enhance social cooperation.  Let moral realism be the claim that we know some non-trivial moral truths.  Consider this argument:
  1. (Premise) Evolutionary process P is the relevant process that produced both our religious and our moral beliefs.
  2. (Premise) If all religious beliefs are false, P is unreliable (since roughly half of the basic types of beliefs produced by P are then false).
  3. So, if all religious beliefs are false, then the relevant process that produced our moral beliefs is unreliable. (By 1 and 2)
  4. (Premise) Beliefs produced by an unreliable relevant process are not knowledge. (This is a consequence of reliabilism.)
  5. So, if all religious beliefs are false, we lack moral knowledge. (By 3 and 4 plus the analytic truth that knowledge requires belief.)
  6. (Premise) If moral realism is true, we have moral knowledge. 
  7. So, if all religious beliefs are false, moral realism is false. (By 5 and 6)
  8. So, if moral realism is true, some religious beliefs are true. (By 7)
And moral realism is true.
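The propositional skeleton of steps 3-8 can be brute-force checked; a sketch with my own abbreviations (R: all religious beliefs are false; U: the relevant moral-belief-forming process is unreliable; K: we have moral knowledge; M: moral realism is true):

```python
from itertools import product

# Premise 3: R -> U; premise 4 (applied to moral beliefs): U -> not K;
# premise 6: M -> K.  Conclusion 8: M -> not R, i.e. if moral realism
# is true, some religious belief is true.
def valid():
    for R, U, K, M in product([False, True], repeat=4):
        premises = (not R or U) and (not U or not K) and (not M or K)
        conclusion = not M or not R
        if premises and not conclusion:
            return False  # counterexample found
    return True

print(valid())  # → True: the skeleton is classically valid
```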

Now, I am not sure whether 1 is true.  But I think the main alternative to 1 is that God produced some of our religious or some of our moral beliefs or both.  And if that alternative is true, then some religious beliefs are true, namely those that say that God exists.  So my uncertainty about 1 does not harm the argument.  I am also not sure about the reliabilist premise 4, and that's more serious.


I think the naturalist reliabilist who wants to deny 8 will accept that P produced our religious and moral beliefs, but say that P is not the epistemologically relevant process.  The relevant process is, perhaps, the sub-type of P which is the genetic or mimetic production of beliefs in moral claims in order to enhance social cooperation.  I think this identification of the relevant sub-type is objectionably ad hoc.

Tuesday, September 7, 2010

Do I believe the multiplication table?

Most entries in the multiplication table, like 8x8, I know off the top of my head. But some may require a quick calculation. If you ask me what 7x8 is, I may do 49+7=56, and if you ask me what 6x9 is, I might do 60-6=54.
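The shortcuts just described are all small applications of the distributive law; spelled out (illustrative only):

```python
# 7x8 via 7x7: 7*8 = 7*(7+1) = 49 + 7
assert 7 * 8 == 49 + 7 == 56
# 6x9 via 6x10: 6*9 = 6*(10-1) = 60 - 6
assert 6 * 9 == 60 - 6 == 54
# 8x4 via halving 8x8: 8*4 = (8*8)/2 = 64/2
assert 8 * 4 == 64 // 2 == 32
```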

But is there really an important distinction here? You ask me what 8x4 is. Just about right away, 32 comes to mind. But what if, in fact, my mind (or brain?) unconsciously calculated it: 64/2=32? After all, I am more confident of my knowledge of 8x8 than of 8x4, and I think it comes to mind faster and more naturally. And even in my 7x8 calculation, there were unconscious elements. I don't have to consciously think: 7x8 = 7x(7+1) = 7x7 + 7x1 = 49+7 = 56. I just consciously think: 7x7=49, 49+7=56. So along with the conscious processing, there is unconscious application of the distributive law. And hence it is quite a reasonable hypothesis that when I am asked about 8x4, I might indeed be making a quick unconscious calculation. And that would help explain why I can call to mind 8x8 faster than 8x4. Furthermore, if the brain is at all like a computer and if the brain is where memories are housed, information is stored in some encoded and maybe even compressed form. There will thus always be a computational process of some sort when making stored data usable.

Actually, in the above I wasn't completely correct. I think I actually do have 7x8 and 6x9 memorized. But normally (though not now, since the examples are fresh in mind) recalling them from memory takes more time and effort, and I feel it is less reliable than doing the quick calculations. However, one could easily imagine that I don't have them memorized at all, and in the following I will counterfactually assume that.

Now, it is tempting to say that I don't believe that 7x8=56 if I have to actually compute it. But if computation is involved in almost all processes of recall, then it seems we believe very little at all, except the things we're occurrently thinking. And that's absurd. For one, it seems plausible that beliefs are needed for justification, and so if we have so few beliefs, and yet many of our beliefs require many other beliefs for their justification, then fewer of our beliefs end up justified than is right.

Perhaps, then, we should say that there is a difference between calling and recalling to mind. But that distinction is going to be hard to draw.

So maybe what we should say is this. When calling something to mind would involve an unconscious process, then we have a case of belief, but when the process would be conscious, there is no belief until the proposition is called to mind. Now the idiot savant who can do very big arithmetical calculations unconsciously counts as believing all the answers ahead of calculation. That doesn't seem intuitively right. However, whether we call it a belief or not, I do think we should not in any important way distinguish the case of unconscious arithmetical computation from more ordinary cases of recall. And once we realize that unconscious computation can be very complex, we really shouldn't distinguish conscious from unconscious computability in any normatively important way.

Here are some potential consequences. First, we might be pushed to some sort of reliabilism, perhaps of a proper function sort. For there ought to be a distinction between justified and unjustified belief, and if we do not distinguish belief from what one has a skill to compute, then we need a similar account of the justification of the outputs of that skill. But that account, very likely, will involve the reliability of the skill. Second, if we want to maintain some distinction between non-occurrent belief and skill at generating occurrent belief, this distinction is likely to be a vague one, involving the amount of computation. In particular, I suspect that the distinction may not match up with what we want to say about knowledge. So it may be that knowledge doesn't entail belief—maybe knowledge merely entails the possession of a skill of calling to mind.

I am not particularly attached to the conclusions. I just want to provoke some discussion about the phenomena.