
Tuesday, September 24, 2024

Chanceability

Say that a function P : F → [0,1] where F is a σ-algebra of subsets of Ω is chanceable provided that it is metaphysically possible to have a concrete (physical or not) stochastic process with a state space of the same cardinality as Ω and such that P coincides with the chances of that process under some isomorphism between Ω and the state space.
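
In symbols (my gloss on the definition; φ is the isomorphism and Ch is the chance function of the candidate process):

    P \text{ is chanceable} \iff \Diamond\, \exists S\, \exists \mathrm{Ch}\, \exists \varphi\,
    [\, \varphi : \Omega \to S \text{ is a bijection and } \mathrm{Ch}(\varphi[A]) = P(A) \text{ for all } A \in F \,].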

Here are some hypotheses one might consider:

  1. If P is chanceable, P is a finitely additive probability.

  2. If P is chanceable, P is a countably additive probability.

  3. If P is a finitely additive probability, P is chanceable.

  4. If P is a countably additive probability, P is chanceable.

  5. A product of chanceable countably additive probabilities is chanceable.

It would be nice if (2) and (4) were both true; or if (1) and (3) were.

I am inclined to think (5) is true, since if the Pi are chanceable, they could be implemented as chances of stochastic processes of causally isolated universes in a multiverse, and the result would have chances isomorphic to the product of the Pi.

I think (3) is true in the special case where Ω is finite.

I am skeptical of (4) (and hence of (3)). My skepticism comes from the following line of thought. Let Ω = ℵ1. Let F be the σ-algebra of countable and co-countable subsets (A is co-countable provided that Ω − A is countable). Define P(A) = 1 for the co-countable subsets and P(A) = 0 for the countable ones. This is a countably additive probability. Now let < be the ordinal ordering on ℵ1. Then if P is chanceable, it can be used to yield paradoxes very similar to those of a countably infinite fair lottery.
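
Countable additivity is easy to check: any two co-countable sets have uncountable intersection, so a pairwise disjoint sequence A1, A2, … in F contains at most one co-countable set (and every set disjoint from a co-countable set lies inside its countable complement, hence is countable); also, a countable union of countable sets is countable. Hence:

    P\Big(\bigcup_n A_n\Big) =
    \begin{cases}
    1 = \sum_n P(A_n), & \text{if some } A_k \text{ is co-countable},\\
    0 = \sum_n P(A_n), & \text{if every } A_n \text{ is countable}.
    \end{cases}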

For instance, consider a two-person game (this will require the product of P with itself to be chanceable, not just P; but I think (5) is true) where each player independently gets an ordinal according to a chancy isomorph of P, and the one who gets the larger ordinal wins a dollar. Then each player will think the probability that the other player has the bigger ordinal is 1, and will pay an arbitrarily high fee to swap ordinals with them!
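
The computation behind the paradox: for any ordinal α < ℵ1 that I might draw, only countably many ordinals fail to beat it. So, assuming the process's chances track P:

    \mathrm{Ch}(\text{the other player's ordinal} > \alpha)
    = P(\{\beta : \beta > \alpha\}) = 1,
    \quad \text{since } \{\beta : \beta \leq \alpha\} \text{ is countable for every } \alpha < \aleph_1.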

Friday, September 14, 2018

A puzzle about knowledge in lottery cases

I am one of those philosophers who think that it is correct to say that I know I won’t win the lottery—assuming of course I won’t. Here is a puzzle about the view, though.

For reasons of exposition, I will formulate it in terms of dice and not lotteries.

The following is pretty uncontroversial:

  1. If a single die is rolled, I don’t know that it won’t be a six.

And those of us who think we know we won’t win the lottery will tend to accept:

  2. If ten dice are rolled, I know that they won't all be sixes.

So, as I add more dice to the setup, somewhere I cross a line from not knowing that they won't all be sixes to knowing. It won't matter for my puzzle whether the line is sharp or vague, nor where it lies. (I am inclined to think it may already lie at two dice, but at the latest at three.)
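
For concreteness, here is the exact arithmetic behind the line-crossing (a quick sketch in Python; no simulation, just the probabilities):

    # Probability that n fair dice all come up sixes, and hence the
    # probability of N (that not all of them are sixes), for n = 1 to 10.
    for n in range(1, 11):
        p_all_sixes = (1 / 6) ** n
        print(f"{n} dice: P(all sixes) = {p_all_sixes:.2e}, P(N) = {1 - p_all_sixes:.10f}")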

Let N be the proposition that not all the dice are sixes.

Now, suppose that ten fair dice get rolled, and you announce to me the results of the rolls in some fixed order, say left to right: “Six. Six. Six. Six. Six. Six. Six. Six. Six. And five.”

When you have announced the first nine sixes, I don't know N to be true. For at that point, N is true if and only if the remaining die is not a six, and by (1) I don't know of a single die that it won't be a six.
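
By the independence of the dice, my credence in N has by that point collapsed to the single-die value:

    P(N \mid \text{first nine dice are sixes}) = P(\text{tenth die} \neq 6) = \tfrac{5}{6}.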

Here is what puzzles me. I want to know if in this scenario I knew N in the first place, prior to any announcements or rolls, as (2) says.

Here is a reason to doubt that I knew N in the first place. Vary the case by supposing I wasn't paying attention, so even after the ninth announcement, I haven't noticed that you have been saying “Six” over and over. If I don't know in the original scenario where I was paying attention, I think I don't know in this case, either. For knowledge shouldn't be a matter of accident. My being lucky enough not to pay attention, while it better positioned me with regard to my credence in N (which remained very high, instead of creeping down as the announcements were made), shouldn't have resulted in knowledge.

But if I don't know after the ninth unheard announcement, surely I also don't know before any of the unheard announcements. For unheard announcements shouldn't make any difference. But by the same token, in the original scenario, I don't know N prior to any of the announcements. For it shouldn't make any difference to whether I know at t0 whether I will be paying attention. When I am not paying attention, I have a justified true belief that N is true, but I am Gettiered. Further, there is no relevant epistemic difference between me before the die rolls and me in the interval between the die rolls and the start of the announcements. If I don't know N at the latter point, I don't know N at the beginning.

So it seems that contrary to (2) I don’t know N in the first place.

Yet I am still strongly pulled to thinking that normally I would know that the dice won't all be sixes. This suggests that whether I will know that the dice won't all be sixes depends not only on whether it is true, but on what the pattern of the dice will in fact be. If there will be nine sixes and one non-six, then I don't know N. But if it will be a more “random-looking” pattern, then I do know N. This makes me uncomfortable. It seems wrong to think the actual future pattern matters. Maybe it does. Anyway, all this raises an interesting question: What do Gettier cases look like in lottery situations?

I see four moves possible here:

A. Reject the move from not knowing in the case where you hear the nine announcements to not knowing in the case where you failed to hear the nine announcements.

B. Say you don’t know in lottery cases.

C. Embrace the discomfort and allow that in lottery cases whether I know I won’t win depends on how different the winning number is from mine.

D. Reject the concept of knowledge as having a useful epistemological role.

Of these, move B, unless combined with D, is the least plausible to me.

Wednesday, December 14, 2016

Knowledge of vague truths

Suppose that we know in lottery cases—i.e., if a lottery has enough tickets and one winner, then we know ahead of time that we won’t win. I know it’s fashionable to deny such knowledge, but such denial leads either to scepticism or to having to say things like “I agree that I have better evidence for p than for q, but I know q and I don’t know p” (after all, if a lottery has enough tickets, I can have better evidence that I won’t win than that I have two hands).

Suppose also that classical logic holds even in vagueness cases. This is now a mainstream assumption in the vagueness literature, I understand.

Finally, suppose that once the number of tickets in a lottery reaches about a thousand, I know I won't win. (The example can be modified if a larger number is needed.) Now for each positive natural number n, let Tn be the proposition that a person whose height is n microns is tall but a person whose height is n−1 microns is not tall. At most one of the Tn propositions is true, since anybody taller than a tall person is tall, and anybody shorter than a non-tall person is not tall. Moreover, since there is a non-tall person and there is a tall person, classical logic requires that at least one of the Tn is true.

Hence, exactly one of the Tn is true. Now, some of the Tn are definitely false. For instance, T1000000 is definitely false (since someone a meter tall is definitely not tall) and T2000000 is definitely false (since someone a micron short of two meters tall is definitely tall). But if anything is vague, it will be vague where exactly the cut-off between non-tall and tall lies. And if that is vague, then in the vague area between non-tall and tall, it will be vague whether Tn is true. That vague area is at least a millimeter long (in fact, it's probably at least five centimeters long), and since there are a thousand microns to the millimeter, there will be at least a thousand values of n such that Tn is vague.
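
The count, explicitly, on the conservative millimeter figure:

    1~\text{mm} = 1000~\mu\text{m}, \quad \text{so there are at least } 1000 \text{ values of } n \text{ with } T_n \text{ vague}.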

Moreover, these thousand Tn are pretty much epistemically on a par. Let n be any number within that vague range, and suppose that in fact Tn is false. Then this is a lottery case with at least a thousand tickets. So, if in the lottery case I know I didn't win, in this case I know that Tn is false.

Hence, some vague truths can be known—assuming that we know in lottery cases and that classical logic holds.

Of course, as usual, some philosophers will want to reverse the argument, and take this to be another argument that we don’t know in lottery cases, or that classical logic doesn’t hold, or that there is no vagueness.

Wednesday, May 6, 2015

Knowing you won't win the lottery

The following seems true:

  1. If you don't have enough evidence to know p, and your evidence for q is poorer than your evidence for p, then you don't have enough evidence to know q.

But if a lottery is large enough, then my probabilistic evidence that I won't win can be better than my evidence that I am now typing. I could be mistaken about being awake, after all. But I know I am typing, so I have enough evidence to know that I am typing. Hence, by (1), I would have enough evidence to know that I won't win the lottery. So in lottery cases, for large enough lotteries, on probabilistic grounds alone one has enough evidence to know that one won't win. But when one has enough evidence to know, and everything else goes right, then one knows. It would be very strange if the other things couldn't possibly go right. So one can know that one won't win the lottery, on probabilistic grounds alone.
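
As a sketch of the comparison: if my evidence supports the proposition that I am now typing to degree c < 1, then any one-winner lottery with N tickets such that

    1 - \frac{1}{N} > c, \quad \text{i.e., } N > \frac{1}{1 - c},

gives me better probabilistic evidence that I won't win than I have that I am typing.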

Monday, December 30, 2013

Hope

If there are ten lottery tickets, and I hold one, I shouldn't hope to win, but I should simply assign probability 1/10 to my winning. Anything beyond the probabilities in the way of hope would be irrational. Likewise, if I have probability 9/10 of winning, then I can have confidence, but this confidence should no more be a hope than in the former case. It's just a confidence of 9/10.

But if my friend has fallen morally many times but promises to do better, I shouldn't simply calculate the probability of his doing better using the best inductive logic and leave it at that. I should hope he will do better.

What makes for the difference? In the case of the friend, he should do better. But it is, of course, false that I should win the lottery. Indeed, the outcome of my winning the lottery is in no way normatively picked out. I can appropriately hope that the lottery will be run fairly, but that's that.

If this is right then it seems hope is of what should be. Well, that's not quite right. For if I have done something so terrible that my friend is under no obligation to forgive me, I can still hope for her supererogatory forgiveness. So, perhaps, hope is of what should be or what goes over and beyond a should.

If this is right, then this neatly dovetails with my account of trust or faith. Faith has as its proper object a present state of affairs that should be, such as a testifier's honesty and reliability, or perhaps—I now add—a present state of affairs that goes beyond a should. Hope has as its proper object a future state of affairs that should be or goes beyond a should. Both of these flow from love.

If this is right, then in order for there to be appropriate hope in things beyond human power—such as a hope that an asteroid won't wipe out all life on earth—there must be shoulds, or beyond-shoulds, that go beyond human power. This requires an Aristotelian teleology or theism.

Tuesday, November 8, 2011

Attitudes to risk and the law of large numbers

People do things that seem to be irrational in respect of maximizing expected utilities. For instance, art collectors buy insurance, even though it seems that the expected payoff of buying insurance is negative—or else the insurance company wouldn't be selling it (some cases of insurance can be handled by distinguishing utilities from dollar amounts, as I do here, but I am inclined to think luxury items like art are not a case like that). Likewise, people buy lottery tickets, and choose the "wrong" option in the Allais Paradox.

Now, there are all sorts of clever decision-theoretic ways of modeling these phenomena and coming up with variations on utility-maximization that handle them. But rather than doing that I want to say something else about these cases.

Why is it good to maximize expected utilities in our choices (and let's bracket all deontic constraints here—let's suppose that none of the choices are deontically significant)? Well, a standard and plausible justification involves the Law of Large Numbers [LLN] (I actually wonder if we shouldn't be using the Central Limit Theorem instead—that might even strengthen the point I am going to make). Suppose you choose between option A and option B in a large number of independent trials. Then, on moderate assumptions on A and B, the LLN applies and says that if the number of trials N is large, probably the payoff for choosing A each time will be relatively close to N·E[A] and the payoff for choosing B each time will be relatively close to N·E[B], where E[A] and E[B] are the expected utilities of A and B, respectively. And so if E[A] > E[B], you will probably do better in the long run by choosing A rather than by choosing B, and you can (on moderate assumptions on A and B, again) make the probability that you will do better by choosing A as high as you like by making the number of trials large.

But here's the thing. My earthly life is finite (and I have no idea how decision theory is going to apply in the next life). I am not going to have an infinite number of trials. So how well this LLN-based argument works depends on how fast the convergence of observed average payoff to the statistically expected payoff in the LLN is. If the convergence is too slow relative to the expected number of A/B-type choices in my life, the argument is irrelevant. But now here's the kicker. The rate of convergence in the LLN depends on the shape of the distributions of A and B, and does so in such a way that the lop-sided distributions involved in the problems mentioned in the first paragraph of this post are going to give particularly slow convergence. In other words, the standard LLN-based argument for expected utility maximization applies poorly precisely to the sorts of cases where people don't go for expected utility maximization.
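
A quick simulation illustrates this. It is a minimal sketch with made-up payoffs: both gambles are normalized to expected utility 1, but the second is lopsided in the way a lottery is:

    import random

    def moderate():
        # Pays 0 or 2 with equal chance; expected value 1.
        return random.choice([0, 2])

    def lottery():
        # Pays 1,000,000 with chance 1/1,000,000, else 0; expected value 1.
        return 1_000_000 if random.random() < 1e-6 else 0

    random.seed(0)
    for n in (100, 10_000):
        avg_mod = sum(moderate() for _ in range(n)) / n
        avg_lot = sum(lottery() for _ in range(n)) / n
        print(f"{n} trials: moderate average = {avg_mod:.3f}, lottery average = {avg_lot:.3f}")

The moderate average is already near 1 after a hundred trials, while the lottery average is almost surely exactly 0 for any humanly feasible number of trials, far from its expected value of 1.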

That said, I don't actually think this cuts it as a justification of people's attitudes towards things like lotteries and insurance. Here is why. Take the case of lotteries. With a small number of repetitions, the observed average payoff of playing the lottery will likely be rather smaller than the expected value of the payoff, because the expected value of the payoff depends on winning, and probably you won't win with a small number of repetitions. So taking into account the deviation from the LLN actually disfavors playing the lottery. The same goes for insurance and Allais: taking into account the deviation from the LLN should, if anything, tell against insuring and choosing the "wrong" gamble in Allais.
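
More precisely, if the chance of winning on a single play is p, then

    P(\text{no win in } n \text{ plays}) = (1 - p)^n \approx 1 - np \quad \text{when } np \ll 1,

so for humanly realistic numbers of plays the observed average payoff almost certainly omits the jackpot entirely and thus falls below the expectation.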

Maybe there is a more complex explanation (but not justification) here. Maybe people sense (consciously or not—there might be some evolutionary mechanism here) that these cases don't play nice with the LLN, and so they don't do expected utility maximization, but do something heuristic, and the heuristic fails.