
Saturday, October 25, 2025

Spatiality and temporality

Here’s an interesting thing:

  1. Learning that our spatiality is an illusion need not radically change the pattern of our rational lives.

  2. Learning that our temporality is an illusion would necessarily radically change the pattern of our rational lives.

To see that (1) is true, note that finding out that Berkeley’s idealism is true need not radically change our lives. It would change various things in bioethics, but the basic structure of sociality, planning for the future, and the like could still remain.

On the other hand, if our temporality were an illusion, little of what we think of as rational would make sense.

Thus, temporality is more central to our lives than spatiality, important as the latter is. It is no surprise that one of the great works of philosophy is called Being and Time rather than Being and Space.

Curiously, though, even though temporality is more central to our lives than spatiality, temporality is also much more mysterious!

Wednesday, April 23, 2025

Sensory-based hacking and consent

Suppose human beings are deterministic systems.

Then quite likely there are many cases where the complex play of associations combined with a specific sensory input deterministically results in a behavior in a way where the connection to the input doesn’t make rational sense. Perhaps I offer you a business deal, and you are determined to accept the deal when I wear a specific color of shirt because that shirt unconsciously reminds you of an excellent and now deceased business partner you once had, while you would have found the deal dubious had I worn any other color. Or, worse, I am determined to reject a deal offered by some person under some circumstances where the difference-maker is that the person is a member of a group I have an implicit and irrational bias against. Or perhaps I accept the deal precisely because I am well fed.

If this is true, then we are subject to sensory-based hacking: by manipulating our sensory inputs, we can be determined to engage in specific behaviors that we wouldn’t have engaged in were those sensory inputs somewhat different in a way that has no rational connection with the justification of the behavior.

Question: Suppose a person consents to something (e.g., a contract or a medical procedure) due to deliberate deterministic sensory-based hacking, but otherwise all the conditions for valid consent are satisfied. Is that consent valid?

It is tempting to answer in the negative. But if one answers in the negative, then quite a lot of our consent is in question. For even if we are not victims of deliberate sensory-based hacking, we are likely often impacted by random environmental sensory-based hacking—people around us wear certain colors of shirts or have certain shades of skin. So the question of whether determinism is true impacts first-order questions about the validity of our consents.

Perhaps we should distinguish three kinds of cases of consent. First, we have cases where one gives consent in a way that is rational given the reasons available to one. Second, we have cases where one gives consent in a way that is not rational but not irrational. Third, we have cases of irrational consent.

In cases where the consent is rational, perhaps it doesn’t matter much that we were subject to sensory-based hacking.

In cases where the consent is neither rational nor irrational, however, it seems that the consent may be undermined by the hacking.

In cases where the consent is irrational, one might worry that the irrationality undercuts the validity of the consent anyway. But that’s not in general true. It may be irrational to want to have a very painful surgery that extends one’s life by a day, but the consent is not invalidated by the irrationality. And in cases where one irrationally gives consent, it seems even more plausible that sensory-based hacking undercuts the consent.

I wonder how much difference determinism makes to the above. I think it makes at least some difference.

Tuesday, January 14, 2025

More on the centrality of morality

I think we can imagine a species whose members have moral agency, but for whom moral agency is a minor part of flourishing. I assume wolves don’t have moral agency. But now imagine a species of canids that live much like wolves, except that every couple of months they get to make a very minor moral choice about whether to inconvenience the pack in the slightest way—the rest is instinct. It seems to me that these canids are moral agents, but morality is a relatively minor part of their flourishing. The bulk of the flourishing of these canids would be the same as that of ordinary wolves.

Aristotle argued that the fact that rationality is how we differ from other species tells us that rationality is what is central to our flourishing. The above thought experiment shows that the argument is implausible. Our imaginary canids could, in fact, be the only rational species in the universe, and their moral agency or rationality (with Aristotle and Kant, I am inclined to equate the two) would be the one thing that makes them different from other canids, and yet what is more important to their flourishing is what they have in common with other canids.

At the same time, it would be easy for an Aristotelian theorist to accommodate my canids. One need only say that the form of a species defines what is central to the flourishing of its members, and in my canids, unlike in humans, morality is not so central. And one can somehow observe this: rationality just is clearly important to the lives of humans in a way in which it’s not so much to these canids.

In this way, I think, the Aristotelian may have a significant advantage over a Kantian. For a Kantian may have to prioritize rationality in all possible species.

In any case, we should not take it as a defining feature of morality that it is central to our flourishing.

One might wonder how this works in a theistic context. For humans, moral wrongdoing is also sin, an offense against a loving infinite Creator. As I’ve described the canids, they may have no concept of God and sin, and so moral wrongdoing isn’t seen as sin by them. Could you have a species which does have a concept of God and sin, but where morality (and hence sin) isn’t central to flourishing? Or does bringing God in automatically elevate morality to a higher plane? Anselm thought so. He might have been right. If so, then the discomfort that one is liable to feel at the idea of a species of moral agents where morality is not very important could be an inchoate grasp of the connection between God and morality.

Friday, May 17, 2024

Acting for the sake of rationality alone

Alice is confused about the nature of practical rationality and asks the wrong philosopher about it. She is given this advice:

  1. For each of your options consider all the potential pleasures and pains for you that could result from the option. Quantify them on a single scale, multiply them by their probabilities, and add them up. Go for the option where the resulting number is biggest.

Some time later, Alice goes to a restaurant and follows the advice to the letter. After spending several hours poring over the menu and performing back-of-the-envelope calculations, she orders and eats the kale and salmon salad.

Traditional decision theory will try to explain Alice’s action in terms of ends and means. What is her end? The obvious guess is that it’s pleasure. But that need not be correct. Alice may not care at all about pleasure. She just cares about doing the action that maximizes the sum of pleasure quantities multiplied by their probabilities. She may not even know that this sum is an “expected value”. It’s just a formula, and she is simply relying on an expert’s opinion as to what formula to use. (If we want to, we could suppose the philosopher gave Alice a logically equivalent formula so complicated that she can’t tell that she is maximizing expected pleasure.)
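To fix ideas, here is a minimal sketch in code of the formula Alice is following, with entirely made-up options, pleasure/pain quantities, and probabilities (none of these numbers are in the story):

```python
# A minimal sketch of the philosopher's advice, with made-up numbers.
# For each option: (pleasure-or-pain quantity, probability) pairs for
# the potential outcomes of that option.
options = {
    "kale and salmon salad": [(6, 0.7), (-1, 0.2)],
    "triple cheeseburger": [(9, 0.5), (-5, 0.4)],
}

def advised_score(outcomes):
    # Quantify on a single scale, multiply by probabilities, add up.
    return sum(quantity * prob for quantity, prob in outcomes)

# Go for the option where the resulting number is biggest.
best = max(options, key=lambda name: advised_score(options[name]))
print(best)
```

Nothing in the procedure requires Alice to recognize the score as an expected pleasure; it is just a number an expert has told her to maximize.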

I suppose the right end-means analysis of Alice’s action would be something like this:

  • End: Act rationally.

  • Means: Perform an action that maximizes the sum of products of pleasures and probabilities.

The means is constitutive rather than causal. In this case, there is no causal means that I can see. (Alice may have been misinformed by the same philosopher that there is no such thing as causation.)

The example thus shows that there can be cases of action where one’s aim is simply to act rationally, where one isn’t aiming at any other end. These may be defective cases, but they are nonetheless possible.

Tuesday, February 13, 2024

Playing to win in order to lose

Let’s say I have a friend who needs cheering up as she has had a lot of things not go her way. I know that she is definitely a better badminton player than I. So I propose a badminton match. My goal in doing so is to have her win the game, so as to cheer her up. But when I play, I will of course be playing to win. She may notice if I am not, plus in any case her victory will be the more satisfying the better my performance.

What is going on rationally? I am trying to win in order that she may win a closely contested game. In other words, I am pursuing two logically incompatible goals in the same course of action. Yet the story makes perfect rational sense: I achieve one end by pursuing an incompatible end.

The case is interesting in multiple ways. It is a direct counterexample to the plausible thesis that it is not rational to be simultaneously pursuing each of two logically incompatible goals. It’s not the only counterexample to that thesis. A perhaps more straightforward one is where you are pursuing a disjunction between two incompatible goods, and some actions are rationally justified by being means to each good. (E.g., imagine a more straightforward case where you reason: If I win, that’ll cheer me up, and if she wins, that’ll cheer her up, so either way someone gets cheered up, so let’s play.)

The case very vividly illustrates the distinction between:

  1. Instrumentally pursuing a goal, and

  2. Pursuing an instrumental goal.

My pursuit of victory is instrumental to cheering up my friend, but victory is not itself instrumental to my further goals. On the contrary, victory would be incompatible with my further goal. Again, this is not the only case like that. A case I’ve discussed multiple times is that of follow-through in racquet sports: after hitting the ball or shuttle, you intentionally continue moving the racquet, because the hit will be smoother if you intend to follow through, even though the continuation of movement has no physical effect on the ball or shuttle. You are instrumentally pursuing follow-through, but the follow-through is not instrumental.

Similarly, the case also shows that it is false that every end you have is either pursued for its own sake or a means to something else. For neither are you pursuing victory for its own sake nor is victory a means to something else—though your pursuit of victory is a means to something else.

Given the above remarks, here is an interesting ethics question. Is it permissible to pursue the death of an innocent person in order to save that innocent person’s life? The cases are, of course, going to be weird. For instance, your best friend Alice is a master fencer, and has been unjustly sentenced to death by a tyrant. The tyrant gives you one chance to save her life: you can fence Alice for ten minutes, with you having a sharpened sword and her having a foil with a safety tip, and you must sincerely try to kill her—the tyrant can tell if you are not trying to kill. If she survives the ten minutes, she goes free. If you fence Alice, the structure of your intention is just as in my badminton case: You are trying to kill Alice in order to save her life. Alice’s death would be pursued by you, but her death is not a means nor something pursued for its own sake.

If the story is set up as above, I think the answer is that, sadly, it is wrong for you to try to kill Alice, even though that is the only way to save her life.

All that said, I still wonder a bit. In the badminton case, are you really striving for victory? Or are you striving to act as if you were striving for victory? Maybe that is the better way to describe the case. If so, then this may undercut my main thesis here.

In any case, if there is a good chance the tyrant can’t tell the difference between your trying to kill Alice and your intentionally performing the same motions that you would be performing if you were trying to kill Alice, it seems to me that it might be permissible to do the latter. This puts a lot of pressure on some thoughts about the closeness problem for Double Effect. For it seems pretty plausible to me that it would be wrong for you to intentionally perform the same motions that you would be performing if you were trying to kill Alice in order to save people other than Alice.

Thursday, August 17, 2023

Tiebreakers

You need to lay off Alice or Bob, or else the company goes broke. For private reasons, you dislike Bob and want to see him suffer. What should you do?

The obvious answer is: choose randomly.

But suppose that there is no way to choose randomly. For instance, perhaps an annoying oracle has told you the outcome of any process that you could have used to make a random decision. The oracle says “If you flip the penny in your pocket, it will come up heads”, and now deciding that Alice is laid off on heads is tantamount to deciding that Alice is laid off.

So what should you do?

There seems to be something rationally and maybe morally perverse in one’s treatment of Alice if one fires her to avoid firing the person that one wants to fire.

But it seems that if one fires Bob, one does so in order to see him suffer, and that’s wrong.

I have two solutions, not mutually exclusive.

The first is that various rules of morality and rationality only make sense in certain normal conditions. Typical rules of rationality simply break down if one is in the unhappy circumstance of knowing that one’s ability to reason rationally is so severely impaired that there is no correlation between what seems rational and what is rational. Similarly, if one is brainwashed into having to kill someone, but is left with the freedom to choose the means, then one may end up virtuously beheading an innocent person if beheading is less painful than any other method of murder available, because the moral rules against murder presuppose that one has freedom of will. It could be that some of our moral rules also presuppose an ability to engage in random processes, and when that ability is missing, then the rules are no longer applicable. And since circumstances where random choices are possible are so normal, our moral intuitions are closely tied to these circumstances, and hence no answer to the question of what is the right thing to do is counterintuitive.

The second is that there is a special kind of reason, a tie-breaker reason. When one fires Bob with the fact that one wants to see him suffer serving as a tie-breaker, one is not intending to see him suffer. Perhaps what one is intending, instead, is a conditional: if one of Alice and Bob suffers, it’s Bob.

Monday, February 27, 2023

Species relativity of priors

  1. It would be irrational for us to assign a very high prior probability to the thesis that spiky teal fruit is a healthy food.

  2. If a species evolved to naturally assign a very high prior probability to the thesis that spiky teal fruit is a healthy food, it would not be irrational for them to do this.

  3. So, what prior probabilities are rational is species relative.

Monday, January 9, 2023

Means inappropriate to ends

Consider this thesis:

  1. You should adopt means appropriate to your ends.

The “should” here is that of instrumental rationality.

I am inclined to think (1) is false if by “end” is meant the end the agent actually adopts, as opposed to a natural end of the agent. If your ends are sufficiently irrational, adopting means appropriate to them may be less rational than adopting means inappropriate to them.

Suppose your end is irrationality. Is it really true that you should adopt the means to that, such as reasoning badly? Surely not! Instead, you should reject the end.

Instead of (1), what is likely true is:

  2. You should be such that you adopt means appropriate to your ends.

But what is wrong with being such that you adopt means inappropriate to your ends is not necessarily the means—it could be the ends.

Unjust laws have no normative force, and stupid ends have no normative force, either.

Saturday, December 17, 2022

Variation in priors and community epistemic goods

Here is a hypothesis:

  • It is epistemically better for the human community if human beings do not all have the same (ur-) priors.

This could well be true because differences in priors lead to a variety of lines of investigation, a greater need for effort in convincing others, and less danger of the community as a whole getting stuck in a local epistemic optimum. If this hypothesis is true, then we would have an interesting story about why it would be good for our community if a range of priors were rationally permissible.

Of course, that it would be good for the community if some norm of individual rationality obtained does not prove that the norm obtains.

Moreover, note that it is very plausible that what range of variation of priors is good for the community depends on the species of rational animal we are talking about. Rational apes like us are likely more epistemically cooperative than rational sharks would be, and so rational sharks would benefit less from variation of priors, since for them the good of the community would be closer to just the sum of the individual goods.

But does epistemic rationality care about what is good for the community?

I think it does. I have been trying to defend a natural law account of rationality on which just as our moral norms are given by what is natural for the will, our epistemic norms are given by what is natural for our intellect. And just as our will is the will of a particular kind of deliberative animal, so too our intellect is the intellect of a particular kind of investigative animal. And we expect a correlation between what a social animal’s nature impels it to do and what is good for the social animal’s community. Thus, we expect a degree of harmony between the norms of epistemic rationality—which on my view are imposed by the nature of the animal—and the good of the community.

At the same time, the harmony need not be perfect. Just as there may be times when the good of the community and the good of the individual conflict in respect of non-epistemic flourishing, there may be such conflict in epistemic flourishing.

I am grateful to Anna Judd for pointing me to a possible connection between permissivism and natural law epistemology.

Tuesday, August 23, 2022

Intending to lower the probability of one's success

It seems a paradigm of irrationality to intend an event E in an action A and yet take the action to lower the probability of E.

But it’s not irrational if my principle is correct that intending a specification of something implies intending that of which it is a specification.

Suppose that Alice is in a bicycle race and is almost at the finish. If she just lets inertia do its job, she will inevitably win. But she carefully starts braking just short of the finish, aiming to cross the finish just a hair in front of Barbara, the cyclist behind her. She does this because she wants to make the race more exciting for the spectators, and she carefully calibrates her braking to make her win but not inevitably so.

Alice is aiming to win with a probability modestly short of one. This is a specification of winning, so by my principle, she is intending to win. But she is also, and in the very same action, aiming to decrease the probability of winning.

Friday, August 12, 2022

Two kinds of norms

On a natural law theory of morality, some moral facts are nature-relative, i.e., grounded in the particular kind of nature the particular rational being has, and other moral facts are structural, a part of the very structure of rationality, and will apply to any possible rational being.

Thus, the norm of monogamy for human beings (assuming, as I think, that there is one) is surely nature-relative—it seems very likely that there could be rational animals to whom a different reproductive strategy would be natural. But what Aquinas calls the first precept of the natural law, namely that the good is to be pursued and the bad is to be avoided, is structural—it applies to any rational being.

I think that evidence for a human norm being nature-relative is that either the space of norms contains other very similar norms nearby or it’s vague which precise norm obtains. For instance, take the norm of respecting one’s parents. This norm implies that we should favor our parents over strangers in our beneficence. However, how much more should we favor our parents over strangers? If there is a precise answer to that question, then there will be other nearby norms—not human ones—that give a slightly different precise answer (requiring a greater or smaller degree of favoring). On the other hand, that the good is to be pursued does not seem to have very similar norms near it—it has an elegant simplicity that a variant norm like “The good is to be pursued except when it is an aesthetic good” does not have.

I used to think the norms involved in double-effect reasoning were structural and hence applied to all possible rational beings. I am no longer confident of this. Take the case of pushing a large person in front of a trolley without their permission in order to stop the trolley from hitting five others. We can now imagine a continuum of cases depending on how thick the clothing worn by the large person is. If the clothing is sufficiently thick, the large person has an extremely small probability of being hurt. If the clothing is ordinary clothing, the large person is nearly certain to die. In between is a continuum of probabilities of death ranging from negligibly close to zero to negligibly close to one, and a continuum of probabilities of other forms of injury. It is wrong to push the large person if the chance of survival is one in a trillion. It is not wrong to push the large person if the chance of their being at all hurt is one in a trillion. Somewhere there is a transition between impermissibility and permissibility. Either that transition is vague or it’s sharp. If it’s sharp, then there are norms very similar to the one we have. If it’s vague, then it’s vague which of many very similar norms we have.

In either case, I think this is evidence that the relevant norm here is nature-relative rather than structural. If this is right, then even if it is wrong for us to push the large person in the paradigmatic case where death is, say, 99.99% certain, there could be rational beings for whom this is not wrong.

This leads to an interesting hypothesis about God’s ethics (somewhat similar to things Mark Murphy has considered):

  1. God is only subject to structural moral norms and does not have any nature-relative moral norms.

I do not endorse (1), but I think it is a hypothesis well worth considering.

Friday, September 24, 2021

Being subject to a Dutch Book

I’ve periodically wondered why doing poorly when faced with a Dutch Book is supposed to be a sign of irrationality, but it’s not a sign of irrationality that rational people do poorly when faced with someone who hits all and only rational people on the head with a baseball bat.

This occurred to me today:

  1. One cannot get a rational person to act against their own interest except by force, luck or superior information.

  2. Imposing a Dutch Book on someone with inconsistent credences does not require force, luck or superior information.

This seems to get at some of the intuition as to why being subject to a Dutch Book is supposed to be a sign of irrationality.

But I don’t know how much confidence we should have in (1). The exception clause already admits three exceptions. This sounds ad hoc. Would we be very surprised if more exceptions had to be added?

Still, there is some plausibility to (1), at least for self-interested rationality.

Monday, November 30, 2020

Incompatible reasons for the same action

While writing an earlier post, I came across a curious phenomenon. It is, of course, quite familiar that we have incompatible reasons that we cannot act on all of: reasons of convenience often conflict with reasons of morality, say. This familiar incompatibility is due to the fact that the reasons support mutually incompatible actions. But what is really interesting is that there seem to be incompatible reasons for the same action.

The clearest cases involve probabilities. Let’s say that Alice has a grudge against Bob. Now consider an action that has a chance of bestowing an overall benefit on Bob and a chance of bestowing an overall harm on Bob. Alice can perform the action for the sake of the chance of overall harm out of some immoral motive opposed to Bob’s good, such as revenge, or she can perform the action for the sake of the chance of overall benefit out of some moral motive favoring Bob’s good. But it would make no sense to act on both kinds of reasons at once.

One might object as follows: The expected utility of the action, once both the chance of benefit and the chance of harm are taken into account, is either negative, neutral or positive. If it’s negative, only the harm-driven action makes sense; if it’s positive, only the benefit-driven action makes sense; if it’s neutral, neither makes sense. But this neglects the richness of possible rational attitudes to risk. Expected utilities are not the only rational way to make decisions. Moreover, the chances may be interval-valued in such a way that the expected utility is an interval that has both negative and positive components.
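A toy illustration of the interval point, with invented numbers (nothing in the case fixes them): suppose the act would benefit Bob by +10 with chance p and harm him by −10 otherwise, and the evidence constrains p only to an interval.

```latex
% Invented numbers, purely for illustration:
% benefit +10 with chance p, harm -10 with chance 1-p.
\[
  EU(p) = 10p - 10(1-p) = 20p - 10,
  \qquad p \in [0.3, 0.7] \;\Rightarrow\; EU \in [-4, 4].
\]
% The interval straddles zero, so expected utility alone does not
% settle which of the two incompatible reasons to act on.
```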

Another objection is that perhaps it is possible to act on both reasons at once. Alice could say to herself: “Either the good thing happens to Bob, which is objectively good, or the bad thing happens and I am avenged, which is good for me.” Sometimes such disjunctive reasoning does make sense. Thus, one might play a game with a good friend and think happily: “Either I will win, which will be nice for me, or my friend will win, and that’ll be nice, too, since he’s my friend.” But the Alice case is different. The revenge reason depends on endorsing a negative attitude towards Bob, which one cannot do while seeking to benefit Bob.

Or suppose that Carl read in what he took to be a holy text that God had something to say about ϕing, but Carl cannot remember if the text said that God commanded ϕing or that God forbade ϕing—it was one of the two. Carl thinks there is a 30% chance it was a prohibition and a 70% chance that it was a command. Carl can now ϕ out of a demonic hope to disobey God or he can ϕ because ϕing was likely commanded by God.

In the most compelling cases, one set of motives is wicked. I wonder if there are such cases where both sets of motives are morally upright. If there are such cases, and if they can occur for God, then we may have a serious problem for divine omnirationality, which holds that God always acts for all the unexcluded reasons that favor an action.

One way to argue that such cases cannot occur for God is by arguing that the most compelling cases are all probabilistic, and that on the right view of divine providence, God never has to engage in probabilistic reasoning. But what if we think the right view of providence involves probabilistic reasoning?

We might then try to construct a morally upright version of the Alice case, by supposing that Alice is in a position of authority over Bob, and instead of being moved by revenge, she is moved to impose a harm on Bob for the sake of justice or to impose a good on him out of benevolent mercy. But now I think the case becomes less clearly one where the reasons are incompatible. It seems that Alice can reasonably say:

  1. Either justice will be served or mercy will be served, and I am happy with both.

I don’t exactly know why it is that (1) makes rational sense but the following does not:

  2. Either vengeance on Bob will be served or kindness to Bob will be served, and I am happy with both.

But it does seem that (1) makes sense in a way in which (2) does not. Maybe the difference is this: to avenge requires setting one’s will against the other’s overall good; just punishment does not.

I conjecture that there are no morally upright cases of rationally incompatible reasons for the same action. That conjecture would provide an interesting formal constraint on rationality and morality.

Friday, September 11, 2020

Non-instrumental pursuits and uncaused causes

Here’s a curious fact: It is one thing to pursue something because it is a non-instrumental good and another to pursue it as a non-instrumental good, or to pursue it non-instrumentally. A rich eccentric might offer me $100 for pursuing some non-instrumental good. I might then do a Google Image search for “great art”, and spend a few seconds contemplating some painting. I would then be pursuing the good of contemplation because it is a non-instrumental good, but not as a non-instrumental good. (What if the eccentric offered to double the payment if I pursued the good non-instrumentally? My best bet would then be to just forget all about the offer and hope I end up pursuing some good non-instrumentally anyway.)

Thinking about the above suggests an important thesis: To pursue a good non-instrumentally is something positive, not merely the denial of instrumentality. Simply cutting out of the world the story about the rich eccentric and keeping my contemplation in place does not make the contemplation be pursued as a non-instrumental good. Rather, such world surgery makes the contemplation non-rational. To make the contemplation a non-instrumental pursuit of a good requires that I add something—a focus on that good in itself. We don’t get non-instrumental pursuit by simply scratching out the instrumentality, just as we don’t get an uncaused cause by just deleting its cause—rather, an uncaused cause is a cause of a different sort, and a non-instrumental pursuit is a pursuit of a different sort.

Thursday, April 23, 2020

The pursuit of perfection and the great chain of being

Consider the following two plausible Aristotelian theses:

  1. A substance naturally pursues each of its own perfections.

  2. Every natural activity of a substance is a perfection of it.

This threatens an infinite regress of pursuits. Reproduction is a perfection of an oak tree. So by 1, the oak naturally pursues reproduction. But by 2, this natural pursuit of reproduction is itself a perfection of the oak. So, by 1, the oak naturally pursues the pursuit of reproduction. And so on, ad infinitum.

So, 1 and 2, though plausible, are problematic. I suggest that we reject 1. Perhaps the oak tree pursues reproduction but does not pursue the pursuit of reproduction. Or perhaps it pursues the pursuit of reproduction, but doesn’t pursue the pursuit of the pursuit of reproduction. How many levels of pursuit are found in the substance is likely to differ from substance to substance: it is one of those things that the substance’s form determines.

We might say that there are more levels of pursuit in a more sophisticated substance. Thus, perhaps, non-living things only have first order pursuits. To use Aristotle’s physics as an example, the stone pursues being in the center of the universe. But the stone does not pursue the pursuit of being in the center of the universe. But in living things, there are multiple levels. The oak tree grows reproductive organs with which it will pursue reproduction, and in growing the organs it pursues the pursuit of reproduction.

Here is an intriguing hypothesis: in human beings, 1 and 2 are both true. There is thus a kind of (potential?) infinity at the heart of our pursuits. For we are capable of forming a mental conception of our perfection as such, which enables us to pursue our perfections as perfections. If an angel offers a dog food, the dog will take it, since it can conceive of food, and thereby become perfected. But even an angel cannot offer a dog perfection as such, since the dog cannot conceive of a perfection as such. However, we can: if an angel says, “If you ask for it, I will make you perfect in some respect or other, without any loss of perfection in any other respect”, that’s a deal we can understand, and it is a deal that is attractive to us, because we pursue perfection as such.

If the above is right, then we have a kind of deep teleological differentiation between three levels of being:

  1. Non-living substances pursue first order perfections only.

  2. Living substances have at least one meta-level of pursuit: they pursue the pursuit of some or all of their first order perfections.

  3. Rational substances have infinitely many meta-levels of pursuit, at least potentially.

Tuesday, September 17, 2019

A gambling puzzle about nonmeasurable events

I have two sealed envelopes, labeled A and B. One contains $3 and the other nothing. You don’t know which is which. I am willing to sell either or both envelopes for $1 each. You have a fixed period of time to inform me whether you are buying neither, both, A, or B, after which time you pay and get to open any envelopes you bought.

Obviously, it makes sense for you to hand me $2 and buy both envelopes and profit by a dollar.

But suppose now that I tell you that I chose which envelope to put the $3 in using a saturated nonmeasurable method. For instance, perhaps I chose a subset N of the points on the circumference of a spinner such that:

  1. N is nonmeasurable,

  2. the only measurable subsets of N have measure zero, and

  3. the only measurable subsets of the complement of N have measure zero,

then I spun the spinner, and if the spinner landed in N, I put the $3 in envelope A, and otherwise in B.

Your purchase options are: Neither, Both, A, or B. The probability that the $3 is in A is completely undefined (we should represent the probability as the full interval from 0 to 1), and the probability that the $3 is in B is completely undefined.
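To spell out the undefinedness, here is the expected-utility arithmetic in sketch form, writing p for the undefined probability that the $3 is in A:

```latex
% p is anywhere in [0,1], so single-envelope payoffs have undefined sign:
\[
  EU(A) = 3p - 1 \in [-1, 2], \qquad
  EU(B) = 3(1-p) - 1 \in [-1, 2], \qquad
  EU(\text{Both}) = 3 - 2 = 1.
\]
% Buying both is a sure gain of $1; buying a single envelope has an
% expected payoff whose sign is completely undefined.
```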

It seems then:

  1. It’s clearly rationally permissible for you to go for Both.

  2. Going for A is neither rationally better nor rationally worse than going for Both. For by going for A, you miss out on B. But the expected utility of purchasing B is completely undefined: it is a choice to pay $1 for a gamble that has a completely undefined probability of paying out $3. So, it is completely undefined whether Both is better than A or worse. If Both is permissible, so is A, then.

  3. But by similar reasoning it is completely undefined whether going for Neither is better than or worse than going for A. For the expected payoff of A is completely undefined. So, if A is rationally permissible, so is Neither, then.

  4. Swapping A and B in the reasoning in (2) shows that B is rationally permissible as well.

So now it seems that all four options are equally permissible. But something has gone wrong here: Clearly, Both beats Neither, and it’s irrational to go for Neither.

I think to get out of the above puzzle, we have to deny the initially plausible principle:

  5. If an option is rationally permissible, and another option is neither better nor worse than it, then the latter is also permissible.

Here is another case where this principle needs to be denied. You have a choice between playing Pac Man, or eating one scoop of ice cream, or eating two. Playing Pac Man is neither better nor worse than either one or two scoops of ice cream. Two scoops of ice cream is better than one. It is clearly rationally permissible to play Pac Man. By (5), it’s permissible to eat one scoop of ice cream, then. But that’s not true, since two scoops beats one.
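One way to see what goes wrong with (5) is to model permissibility as not being beaten by any option one is comparable with. A minimal sketch under that assumption (the non-domination model is my gloss, not something argued for above):

```python
# Sketch: an option is permissible iff no option is strictly better than it.
# better(a, b): True/False when a and b are comparable, None when incomparable.
def better(a, b):
    comparisons = {
        ("two scoops", "one scoop"): True,   # two scoops beats one scoop
        ("one scoop", "two scoops"): False,
    }
    return comparisons.get((a, b))           # Pac Man is incomparable to both

options = ["pac man", "one scoop", "two scoops"]

def permissible(option):
    return not any(better(other, option) for other in options if other != option)

for option in options:
    print(option, permissible(option))
# pac man True, one scoop False, two scoops True:
# one scoop is neither better nor worse than the permissible Pac Man option,
# yet it is impermissible, contrary to principle (5).
```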

So, let’s deny (5). Now I think the reasonable thing to say is that Neither is irrational, but each of Both, A and B is rationally permissible. But there is still a puzzle in the vicinity. Suppose you are asked about your purchases envelope-by-envelope. First you’re offered the chance to buy A, and then a chance to buy B, and once a deal is declined, it’s gone. You have no rational obligation to buy A. After all, going for B alone is permissible. So, let’s say you decline A. Next you’re asked about B. At this point, A is out of the picture, and the question is whether to pay $1 for a completely undefined probability of getting $3. It’s permissible to decline that. So, you can permissibly decline B as well. So, let’s say you do so. Now by a pair of perfectly rational choices you ended up “doing something stupid”. This is a bit like Satan’s Apple, but with a finite number of choices.

The puzzle above seems familiar. I may have read it somewhere and it stuck in my subconscious.

Wednesday, August 28, 2019

Dutch Books and update rationality

It is often said that if you depart from correct Bayesian update, you are subject to a diachronic Dutch Book—a sequence of bets each of which you will rationally have to accept but which jointly are sure to make you lose—and this is supposed to indicate a lack of rationality. That may be, but I want to point out that the lack of rationality is not constituted by being subject to a Dutch Book: being subject to a Dutch Book is merely a symptom. I expect most people working on this stuff know this, but perhaps it’s worth giving an explicit argument for it.

Here is why. Alice, Bob and Carl are observing a coin that is either double-headed (D) or fair (F). Their prior probabilities for the two hypotheses are 1/2 each, and their credences are reasonable and consistent: they assign probability 3/4 to heads showing up, and so on. The coin is flipped and the result is observed. If the coin lands tails, all three correctly update their probability for D to 0. If the coin lands heads, Alice, Bob and Carl each follow a different rule for updating their credence for D. Alice updates to 2/3 in accordance with Bayes’ theorem. Bob updates to 3/4 as that intuitively seems right to him. Carl, on the other hand, initiates a process in his brain which randomly updates to a uniformly chosen credence between 1/2 and 1.
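For reference, Alice’s 2/3 is just Bayes’ theorem applied to the stated setup (heads is certain on D and has probability 1/2 on F):

```latex
% Bayes' theorem with P(D) = P(F) = 1/2, P(H|D) = 1, P(H|F) = 1/2:
\[
  P(D \mid H)
  = \frac{P(H \mid D)\,P(D)}{P(H \mid D)\,P(D) + P(H \mid F)\,P(F)}
  = \frac{1 \cdot \frac12}{1 \cdot \frac12 + \frac12 \cdot \frac12}
  = \frac{2}{3}.
\]
```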

Alice is not subject to a Dutch Book.

Bob is.

But Carl, once again, is not. [Proof: For any betting book, there is a non-zero chance that Carl would be rationally permitted to respond to that book in a way in which it would be rationally permitted for Alice to respond. For Carl and Alice differ in their credences only in post-toss bets dependent on D in the special case where the first toss is heads, and the direction in which they differ is random: Carl has a non-zero chance of having a lower credence than Alice in D at this point and a non-zero chance of having a higher one. If the bet is rationally permissible to take at Alice’s credence of 2/3, then either (a) it is rationally permissible to take at all credences lower than 2/3, or (b) it is rationally permissible to take at all credences higher than 2/3, since the expected outcomes are linear functions of the credence. But there is a non-zero chance that Carl’s credence is lower than Alice’s and a non-zero chance that Carl’s credence is higher than Alice’s. Thus, there is a non-zero chance that Carl can permissibly take the bet, if Alice can permissibly take the bet. And the same argument applies if Alice can permissibly refuse the bet.]

However, Carl is not more rational than Bob, despite not being subject to a Dutch Book due to his unpredictability. Hence, being subject to a Dutch Book is only a symptom of irrationality, not constitutive of it.

Saturday, July 27, 2019

The Trinity, sexual ethics and liberal Christianity

Many Christians deny traditional Christian doctrines regarding sexual ethics while accepting traditional Christian Trinitarian doctrine. This seems to me to be a rationally suspect combination because:

  1. The arguments against traditional Christian sexual ethics are weaker than the arguments against the doctrine of the Trinity.

  2. A number of the controversial parts of traditional Christian sexual ethics are grounded at least as well in Tradition and Scripture as the doctrine of the Trinity is.

Let me offer some backing for claims 1 and 2.

The strongest arguments against traditional Christian sexual ethics are primarily critiques of the arguments for traditional Christian sexual ethics (such as the arguments from the natural law tradition). As such, these arguments do not establish the falsity of traditional Christian sexual ethics, but at best show that it has a weak philosophical foundation. On the other hand, the best arguments against the doctrine of the Trinity come very close to showing that the doctrine of the Trinity taken on its own terms is logically contradictory. The typical Christian theologian is the one who is on the defensive here, offering ways to resolve the apparent contradiction rather than giving rational arguments for the truth of the doctrine.

There are, admittedly, some arguments against traditional Christian sexual ethics on the basis of intuitions widely shared in our society. But we know that these intuitions are very much shaped by a changing culture, insofar as prior to the 20th century one could run intuition-based arguments for opposite conclusions. Hence, we should not consider the arguments based on current social intuitions to be particularly strong. But the intuition that there is something contradictory about the doctrine of the Trinity does not seem to be as dependent on changing social intuitions. The merely socially counterintuitive is rationally preferable to the apparently contradictory.

Neither the whole of the doctrine of the Trinity nor the whole of traditional Christian sexual ethics is explicit in Scripture. But particularly controversial portions of each are explicit in Scripture: the Prologue of John tells us that Christ is God, while both Mark and Luke tell us that remarriage after divorce is a form of adultery, and Paul is clear on the wrongfulness of same-sex sexual activity. And the early Christian tradition is at least as clear, and probably more so, on sexual ethics as on the doctrine of the Trinity.

I am not saying, of course, that it is not rational to accept the doctrine of the Trinity. I think the arguments against the doctrine have successful responses. All I am saying is that traditional Christian sexual ethics fares (even) better.

Tuesday, November 27, 2018

Evil, omniscience, and other matters

If God exists, there are many evils that God doesn’t prevent, even though it seems that we would have been obligated to prevent them if we could.

A sceptical theist move is that God knows something about the situations that we don’t. For instance, it may seem to us that the evil is pointless, but God sees it as interwoven with greater goods.

An interesting response to this is that even if we knew about the greater goods, we would be obligated to prevent the evil. Say, Carl sees Alice about to torture Bob, and Carl somehow knows (maybe God told him) that one day Alice will repent of the evil in response to a beautiful offer of forgiveness from Bob. Then I am inclined to think Carl should still prevent Alice from torturing Bob, even if repentance and forgiveness are goods so great that it would have been better for both Alice and Bob if the torture happened.

Here is an interesting sceptical theist response to this response. Normally, we don’t know the future well enough to know that great goods would arise from our permitting an evil. Because of this, our moral obligations to prevent grave evils have a bias in them towards what is causally closer to us. Moreover, this bias in the obligations, although it is explained by the fact that normally we don’t know the future very well, is present even in the exceptional cases where we do know the future sufficiently well, as in the Carl, Alice and Bob case.

This move requires an ethical system where a moral rule that applies in all circumstances can be explained by its usefulness in normal circumstances. Rule utilitarianism is of course such an ethical system. Divine command theory is as well: God can be motivated to issue an exceptionless rule because of the fact that normally the rule is a good one and it might not be good for us to be trying to figure out whether a case at hand is an exception to the rule (this is something I learned from Steve Evans). And St. Thomas Aquinas in his argument against nonmarital sex holds that natural law is also like that (he argues that typically nonmarital sex is bad for the offspring, and concludes that it is wrong even in the exceptional cases where it’s not bad for the offspring, because, as he says, laws are made with regard to the typical case).

Historically, this approach tends to be used to derive or explain deontic prohibitions (e.g., Aquinas’ prohibition on nonmarital sex). But the move from the typical beneficiality of a rule to its always holding does not require that the rule be a deontic prohibition. A rule that weights nearer causal consequences more heavily could just as easily be justified in such a way, even if the rule did not amount to a deontic prohibition.

Similarly, one might use typical facts about our relationships with those closer to us—that we know what is good for them better than for strangers, that they are more likely to accept our help, that the material benefits of our help enhance the relationship—to explain why helping those closer to us should be more heavily weighted in our moral calculus than helping strangers, even in those cases where the typical facts do not obtain. Once again, this isn’t a deontic case.

One might even have such typical-case-justified rules in prudential reasoning (perhaps a bias towards the nearer future is not irrational after all) and maybe even in theoretical reasoning (perhaps we shouldn’t be perfect Bayesian agents after all, because that’s not in our nature, given that normally Bayesian reasoning is too hard for us).

Wednesday, August 1, 2018

"Commitment": Phenomenology at the rock wall

If you watch people rock climbing enough (in my case, only in the gym, as I have seen disturbing outdoor climbing safety numbers, while gym climbing safety numbers are excellent), you will hear a climber get advised to “commit” more. The context is usually a dynamic move for a hold, one where the climber’s momentum is essential to getting into position to secure the hold, with the paradigm example being a literal jump. The main physiological brunt of the advice to “commit” is to put greater focused effort into jumping higher, reaching further, grabbing more strongly, etc. But the phenomenological brunt of the advice is to will more strongly, with greater, well, commitment. And sometimes when one misses a move, one feels the miss as due to a lack of commitment, a failure to will strongly enough.

While once I heard someone at the gym say “Commit like you’re married to it”, the notion of commitment here seems quite different from the ordinary notion tied to relationships and long-term projects. The most obvious difference is that of time. In the ordinary case, a central component of commitment is a willingness to stick to something for an extended period of time. The climber’s “commitment” lasts at most a second or two. This results in what seems to be a qualitatively different phenomenology, but it could still be that the difference is merely quantitative, much as the difference between living through a week and living through a second is quantitative though it feels qualitative.

But there seems to be a more obviously qualitative difference. The rock-climbing sense of “commit” is essentially occurrent: there is an actual expending of effort. But the ordinary sense is largely dispositional: one would expend the effort if it were called for. Moreover, the rock-climbing sense of the word is typically tied to near-maximal effort, while in the ordinary sense one counts as committed to a project as long as one is willing to expend a reasonable amount of effort. In other words, when it would be unreasonable to expend a certain degree of effort, in the ordinary sense of the word one is not falling short of commitment: the employee unwilling to sacrifice a marriage to the job is not short on commitment to the job. The rock-climbing sense of commitment is not tied to reasonableness: a climber who holds back on a move out of a reasonable judgment that near-maximal effort would be too likely to result in an injury is failing to commit on the move—and typically is doing the right thing under the circumstances (of course, in both senses of the word “commit”, there are times when failure to commit is the right thing to do).

Finally, the ordinary sense divides into normative and non-normative commitment. Normative commitment is a kind of promise—implicit perhaps—while non-normative commitment is an actual dispositional state. Each can exist without the other (though it is typically vicious when the normative exists without the dispositional). In the climbing case, normally the normative component is missing: one hasn’t done anything promise-like.

Here is a puzzle. Bracket the cases where one holds back to avoid an over-use or impact injury (I would guess, without actually looking up the medical data, that when one is expending more effort, one is more tense and injury is likely to be worse). One also understands why someone might fail to commit to a job or a relationship, in either the normative or the non-normative sense: a better thing might come one’s way. But when one is in the middle of a strenuous climbing move, one typically isn’t thinking that one might have something better to do with that second of one’s time. So: Why would someone fail to commit?

My phenomenology answers in two ways. First and foremost, fear of failure. This is rationally puzzling. One knows that a failure to commit to a climbing move increases the probability of failure. So at first sight, it seems like the behavior of someone who goes to a dog show out of a fear of dogs (which is different from the understandable case of someone who goes to a dog show because of a fear of dogs, e.g., in order to fight the fear or in order to have a scary experience). But I think there is actually something potentially rational here. There are two senses of failure. One sense is external: one is failing to meet some outside standard. The second sense is internal to the action: one is failing to do what one is trying to do. The two can be at cross-purposes: if I have decided to throw a table tennis match, my gaining a point can be a failure in the action-internal sense but a success in the external sense.

In climbing, outside of competitive settings, it is the action-internal sense that tends to be more salient: we set our own goals, and what constitutes them as our goals is our willing of them. Is my goal to climb this route, to see how hard it is, or just to get some exercise? It’s all in what I am trying to do.

But in the action-internal sense, generally the badness of a failure increases with the degree to which one is trying. If I am not trying very hard, my failure is not very bad in the action-internal sense. (Of course, in some cases, my failure to try very hard might be bad in some external sense, even a moral one—but it might not be.) So by trying less hard, one is minimizing the badness of a failure. There is a complex rational calculus here, whether or not one takes into account risk aversion. It is easy to see how one might decide, correctly or not, that giving significantly less than one’s greatest effort is the best option (and this is true even without risk aversion).

The secondary, but also interesting, reason that my phenomenology gives for a refusal to commit is that effort can be hard and unpleasant.