
Tuesday, August 26, 2025

An immediate regret principle

Here’s a plausible immediate regret principle:

  1. It is irrational to make a decision such that learning that you’ve made this decision immediately makes it rational to regret that you didn’t make a different decision.

The regret principle gives an argument for two-boxing in Newcomb’s Paradox. If you go for one box, then as soon as you have made that decision you will regret not having made the two-box decision, since there is the clear box with money staring at you. But if you go for two boxes, you will have no regrets.

Interestingly, though, one can come up with predictor stories where one has regrets no matter what one chooses. Suppose there are two opaque boxes, A and B, and you can take either box but not both. A predictor put a thousand dollars in the box that they predicted you won’t take. Their prediction need not be very good: all we need for the story is that there is a better than even probability of their having predicted you choosing A conditionally on your choosing A and a better than even probability of their having predicted you choosing B conditionally on your choosing B. But now as soon as you’ve made your decision, and before you open the chosen box, you will think the other box is more likely to have the money, and so your knowledge of your decision will make it rational to regret that decision. Note that while the original Newcomb problem is science-fictional, there is nothing particularly science-fictional about my story. It would not be surprising, for instance, if someone were able to guess, with better than even accuracy, what their friends would choose.
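
To see the conditional probabilities concretely, here is a small simulation sketch. The 60% predictor accuracy and the coin-flip chooser are invented for illustration and are not part of the story above; all that matters is that the accuracy is better than even.

```python
import random

# Sketch with invented numbers (not from the post): the predictor guesses your
# actual choice correctly with probability 0.6, and the $1000 goes into the box
# they predict you will NOT take. You choose A or B by a fair coin flip here,
# just to generate data; only the conditional frequency matters.
ACCURACY = 0.6
TRIALS = 100_000

chose_a = other_has_money_given_a = 0
for _ in range(TRIALS):
    choice = random.choice("AB")
    correct = random.random() < ACCURACY
    predicted = choice if correct else ("B" if choice == "A" else "A")
    money_box = "B" if predicted == "A" else "A"  # box they predict you won't take
    if choice == "A":
        chose_a += 1
        if money_box == "B":
            other_has_money_given_a += 1

# P(money in the unchosen box B | you chose A) is about 0.6 > 1/2,
# so as soon as you know you picked A, the other box looks better.
print(other_has_money_given_a / chose_a)
```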

Is this a counterexample to the immediate regret principle (1), or is this an argument that there are real rational dilemmas, cases where all options are irrational?

I am not sure, but I am inclined to think that it’s a counterexample to the regret principle.

Can we modify the immediate regret principle to save it? Maybe. How about this?

  2. No decision is such that learning that you’ve rationally made this decision immediately makes it rationally required to regret that you didn’t make a different decision.

On this regret principle, regret is compatible with non-irrational decision making but not with (known) rational decision making.

In my box story, it is neither rational nor irrational to choose A, and it is neither rational nor irrational to choose B. Then there is no contradiction to (2), since (2) only applies to decisions that are rationally made. And applying (2) to Newcomb’s Paradox no longer yields an argument for two-boxing, but only an argument that it is not rational to one-box. (For if it were rational to one-box, one could rationally decide to one-box, and one would then regret that.)

The “rationally” in (2) can be understood in a weaker way or a stronger way (the stronger way reads it as “out of rational requirement”). On either reading, (2) has some plausibility.

Thursday, April 20, 2023

Brownian motion and regret

Let B_t be a one-dimensional Brownian motion, i.e., a Wiener process, with B_0 = 0. Let’s say that at time 0 you are offered, for free, a game where your payoff at time 1 will be B_1. Since the expected value of a Brownian motion at any future time equals its current value, this game has zero value, so you are indifferent and go for it.

But here is a fun fact. With probability one, at infinitely many times t between 0 and 1 we will have B_t < 0 (this follows from Th. 27.24 here). At any such time, your expectation of your payoff B_1 will be negative. Thus, at infinitely many times you will regret your decision to play the game.

Of course, by symmetry, with probability one, at infinitely many times between 0 and 1 we will have B_t > 0. Thus if you refuse to play, then at infinitely many times you will regret your decision not to play the game.
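
Here is a rough numerical sketch of both facts. The step size and path count are arbitrary, and a discretized random walk only approximates the continuous-time claim (which is about infinitely many crossing times, not just one), but it shows almost every path dipping below and rising above zero before time 1.

```python
import numpy as np

# Simulate many discretized Brownian paths on [0, 1] and check how often a path
# goes below zero and above zero before time 1. At any time t with B_t < 0, the
# conditional expectation of B_1 is the current (negative) value, so at that
# moment you regret having agreed to play.
rng = np.random.default_rng(0)
n_paths, n_steps = 10_000, 1_000
dt = 1.0 / n_steps

increments = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
paths = np.cumsum(increments, axis=1)  # B_t at t = dt, 2*dt, ..., 1

dips_below = (paths.min(axis=1) < 0).mean()   # fraction of paths with some B_t < 0
rises_above = (paths.max(axis=1) > 0).mean()  # fraction of paths with some B_t > 0

print(dips_below, rises_above)  # both near 1, and they tend to 1 as n_steps grows
```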

So we have a case where regret is basically inevitable.

That said, the story only works if causal finitism is false. So if one is convinced (I am not) that regret should always be avoidable, we have some evidence for causal finitism.

Wednesday, April 19, 2023

Avoiding regrets

I’ve recently been troubled by cases where you are sure to regret your decision, but the decision still seems reasonable. Some of these cases involve reasonable-seeming violations of expected utility maximization, but there is also the Cable Guy paradox, though admittedly I think I can probably exclude the Cable Guy paradox with causal finitism.

I shared Cable Guy with Clare Pruss, and she said that the principle of avoiding future regrets is false, and should be modified to a principle of avoiding final future regrets, because there are ordinary cases where you expect to regret something temporarily. For instance, you volunteer to do something onerous, and you expect that while volunteering you will be regretting your choice, but that you will be glad afterwards.

In all the cases that I’ve been interested in, while you are sure that there will be regret at some point in the future, you are not sure that there will be regret at the end (half the time the Cable Guy comes at the time you bet on him coming, after all).

Friday, April 14, 2023

Independence axiom

Here is an argument for the von Neumann–Morgenstern axiom of independence.

Consider these axioms for a preference structure on lotteries.

  1. If L ≺ M and K ≺ N, then pL + (1−p)K ≾ pM + (1−p)N.

  2. If M dominates L, then pL + (1−p)N ≾ pM + (1−p)N.

  3. If A ≾ pM + (1−p)N′ for all N′ dominating N, then A ≾ pM + (1−p)N.

  4. If L ≺ M, there is an M′ that dominates L but is such that M′ ≺ M.

  5. If M dominates L, then L ≺ M.

  6. Transitivity and completeness.

  7. 0 ⋅ L + 1 ⋅ N ∼ 1 ⋅ N + 0 ⋅ L ∼ N.

Now suppose that L ≺ M and 0 < p < 1. By (4), let M′ dominate L and be such that M′ ≺ M. Let N′ dominate N, so that N ≺ N′ by (5). Then pL + (1−p)N ≾ pM′ + (1−p)N by (2), and pM′ + (1−p)N ≾ pM + (1−p)N′ by (1). By transitivity, pL + (1−p)N ≾ pM + (1−p)N′. This holds for every N′ dominating N, so pL + (1−p)N ≾ pM + (1−p)N by (3).
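
For readers who want this first case laid out as a single displayed chain, here it is in LaTeX; it is the same steps as above, with nothing new added.

```latex
% Case L \prec M, 0 < p < 1, with M' given by (4) and N' any lottery dominating N:
\begin{align*}
pL + (1-p)N &\precsim pM' + (1-p)N  && \text{by (2), since $M'$ dominates $L$}\\
            &\precsim pM  + (1-p)N' && \text{by (1), since $M' \prec M$ and $N \prec N'$.}
\end{align*}
% Since this holds for every N' dominating N, axiom (3) gives
% pL + (1-p)N \precsim pM + (1-p)N.
```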

Now suppose that L ≾ M. Let M′ dominate M. Then L ≺ M′ by (5) and transitivity. By the above, pL + (1−p)N ≾ pM′ + (1−p)N. This holds for every M′ dominating M, so by (3) we have pL + (1−p)N ≾ pM + (1−p)N. Hence we have independence for 0 < p < 1. And by (7) we get it for p = 0 and p = 1.

Enough mathematics. Now some philosophy. Can we say something in favor of the axioms? I think so. Axioms (5)–(7) are pretty standard fare. Axioms (3) and (4) are something like continuity axioms for the space of values. (I think axiom (4) actually follows from the other axioms.)

Axioms (1) and (2) are basically variants on independence. That’s where most of the philosophical work happens.

Axiom (2) is pretty plausible: it is a weak domination principle.

That leaves Axiom (1). I am thinking of it as a no-regret posit. For suppose the antecedent of (1) is true but the consequent is false, so by completeness pM + (1−p)N ≺ pL + (1−p)K. Suppose you chose pL + (1−p)K over pM + (1−p)N. Now imagine that the lottery is run in a step-wise fashion. First a coin that has probability p of heads is tossed to decide if the first (heads) or second (tails) option in the two complex lotteries is materialized, and then later M, N, L, K are resolved. If the coin is heads, then you now know you’re going to get L. But L ≺ M, so you regret your choice: it would have been much nicer to have gone for pM + (1−p)N. If the coin is tails, then you’re going to get K. But K ≺ N, so you regret your choice, too: it would have been much nicer to have gone for pM + (1−p)N. So you regret your initial choice no matter how the coin flip goes.

Moreover, if there are regrets, there is money to be made. Your opponent can offer to switch you to pM + (1−p)N for a small fee. And you’ll do it. So you have made a choice such that you will pay to undo it. That’s not rational.
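
To make the money pump concrete, here is a toy sketch; the payoffs, the fee, and the probability p are all invented for illustration, not taken from the argument above.

```python
import random

# Toy sketch with invented payoffs (not from the post): an agent who, contrary to
# axiom (1), chose pL + (1-p)K over pM + (1-p)N, and who after the coin flip pays
# a small fee to switch to the corresponding branch of the lottery she rejected.
p, FEE, TRIALS = 0.5, 0.25, 100_000
payoff = {"L": 0.0, "M": 10.0, "K": 5.0, "N": 20.0}  # hypothetical, with L < M and K < N

kept, switched = 0.0, 0.0
for _ in range(TRIALS):
    heads = random.random() < p
    held = "L" if heads else "K"    # branch of the lottery she actually chose
    better = "M" if heads else "N"  # branch of the lottery she passed up
    kept += payoff[held]
    # On either flip she strictly prefers the other branch, so she pays to switch:
    switched += payoff[better] - FEE

print(kept / TRIALS, switched / TRIALS)
# She pays the fee every time, ending up with the rejected lottery's distribution
# minus the fee: she has paid to undo her own choice.
```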

So, we have good reason to accept Axiom (1).

This is a fairly convincing argument to me. A pity that the conclusion—the independence axiom—is false.