
Wednesday, May 14, 2025

Semantics of syntactically incorrect language

As anyone who has talked with a language-learner knows, syntactically incorrect sentences often succeed in expressing a proposition. This is true even in the case of formal languages.

Formal semantics, say of the Tarski sort, has difficulties with syntactically incorrect sentences. One approach to saving the formal semantics is as follows: Given a syntactically incorrect sentence, we find a contextually appropriate syntactically correct sentence in the vicinity (and what counts as vicinity depends on the pattern of errors made by the language user), and apply the formal semantics to that. For instance, if someone says “The sky are blue”, we replace it with “The sky is blue” in typical contexts and “The skies are blue” in some atypical contexts (e.g., discussion of multiple planets), and then apply formal semantics to that.
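
As a toy illustration of the repair strategy (a sketch only: the candidate sentences, weights, and scoring below are invented, and a serious model of "vicinity" would also track the speaker's characteristic error patterns):

```python
# A minimal sketch of "find a contextually appropriate correct sentence
# in the vicinity". Candidates and context weights are made up.
from difflib import SequenceMatcher

def repair(utterance, candidates, context_weight):
    """Pick the grammatical candidate that best balances string
    similarity to the faulty utterance against contextual plausibility."""
    def score(s):
        similarity = SequenceMatcher(None, utterance, s).ratio()
        return similarity * context_weight[s]
    return max(candidates, key=score)

typical = {"The sky is blue": 0.9, "The skies are blue": 0.1}
multi_planet = {"The sky is blue": 0.2, "The skies are blue": 0.8}

print(repair("The sky are blue", list(typical), typical))
# -> The sky is blue
print(repair("The sky are blue", list(multi_planet), multi_planet))
# -> The skies are blue
```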

Sometimes this is what we actually do when communicating with someone who makes grammatical errors. But typically we don’t bother to translate to a correct sentence: we can just tell what is meant. In fact, in some cases, we might not even ourselves know how to translate to a correct sentence, because the proposition being expressed is such that it is very difficult even for a native speaker to get the grammar right.

There can even be cases where there is no grammatically correct sentence that expresses the exact idea. For instance, English has a simple present and a present continuous, while many other languages have just one present tense. In those languages, we sometimes cannot produce an exact grammatically correct translation of an English sentence. One can use some explicit markers to compensate for the lack of, say, a present continuous, but the semantic value of a sentence using these markers is unlikely to correspond exactly to the meaning of the present continuous (the markers may have a more determinate semantics than the present continuous). But we can imagine a speaker of such a language who imitates the English present continuous by a literal word-by-word translation of “I am” followed by the other language’s closest equivalent to a gerund, even when such translation is grammatically incorrect. In such a case, assuming the listener knows English, the meaning may be grasped, but nobody is capable of expressing the exact meaning in a syntactically correct way. (One might object that one can just express the meaning in English. But that need not be true. The verb in question may be one that does not have a precise equivalent in English.)

Thus we cannot account for the semantics of syntactically incorrect sentences by applying semantics to a syntactically corrected version. We need a semantics that works directly for syntactically incorrect sentences. This suggests that formal semantics are necessarily mere approximate models.

Similar issues, of course, arise with poetry.

Wednesday, December 11, 2024

Correction to "Goodman and Quine's nominalism and infinity"

In an old post, I said that Goodman and Quine can’t define the concept of an infinite number of objects using their logical resources. Allen Hazen corrected me in a comment in the specific context of defining infinite sentences. But it turns out that I wasn’t just wrong about the specific context of defining infinite sentences: I was almost entirely wrong.

To see this, let’s restrict ourselves to non-gunky worlds, where all objects are made of simples. Suppose, further, that we have a predicate F(x) that says that an object x is finite. This is nominalistically and physicalistically acceptable by Goodman and Quine’s standards: it states a physical feature of a physical object, namely its size qua made of simples. (If the simples all have some finite amount of energy with some positive minimum, F(x) will be equivalent to saying x has a finite energy.)

Now, this doesn’t solve the problem by itself. To say that an object x is finite is not the same as saying that the number of objects with some property is finite. But I came across a cute little trick to go from one to the other in the proof of Proposition 7 of this paper. The trick transposed to the non-gunky mereological setting is this. The following two statements are equivalent in non-gunky worlds satisfying appropriate mereological axioms:

  1. The number of objects x satisfying G(x) is finite.

  2. There is a finite object z such that for any objects x and y with G(x) and G(y), if x ≠ y, then x and y differ inside z (i.e., there is a part of z that is a part of one object but not of the other).

To see the equivalence, suppose (2) is true. Then if z has n simples, any two distinct objects satisfying G differ within these n simples, so each satisfier of G is determined by which of z’s simples are parts of it, and hence there are at most 2^n objects satisfying G. Conversely, if there are finitely many satisfiers of G, there will be a finite object z that contains a simple of difference between x and y for every pair of satisfiers x and y of G (where a simple of difference is a simple that is a part of one but not the other), and any two distinct satisfiers of G will differ inside z.
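
In symbols (my notation, writing s ≤ x for "s is a part of x"):

```latex
% If z is composed of simples s_1, ..., s_n, send each satisfier x of G
% to the set of indices of z's simples that are parts of x:
\[
  \varphi(x) \;=\; \{\, i \in \{1, \dots, n\} : s_i \leq x \,\}.
\]
% By (2), distinct satisfiers differ inside z, so some simple of z is a
% part of one but not of the other, and \varphi is injective. Hence
\[
  \#\{\, x : G(x) \,\} \;\leq\; 2^n.
\]
```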

I said initially that I was almost entirely wrong. In thoroughly gunky worlds, all objects are infinite in the sense of having infinitely many parts, so a mereologically based finiteness predicate won’t help. Nor will a volume- or energy-based one, because we can suppose a gunky world with finite total volume and finite total energy. So Goodman and Quine had better hope that the world isn’t thoroughly gunky.

Thursday, August 18, 2022

Error in "Non-classical probabilities invariant under symmetries"

Yesterday, I discovered an error in the proof of “Theorem 1” of this recent paper of mine (arxiv version). The error occurs in the harder direction of Lemma 2. I do not know how to fix the error. Here’s what I know to remain of the “Theorem”. The proof that (i) implies (ii)–(v) is unaffected. The proof that (iv) implies (ii)–(v) is also unaffected, and likewise unaffected is the equivalence of (ii), (iii) and (v).

But I no longer know if any of (ii)–(v) imply (i). However, (i) is true under the stronger assumption that G is supramenable or that there exist invariant hyperreal probabilities.

The above remarks suffice for almost all the philosophical points in the paper (the philosophical point that behavior for countable sets is decisive is no longer supported in the full conditional probability case), and all the applications I mention in the paper.

I do not know if “Theorem 1” is true. This is an interesting mathematical question.

Update: The error has been fixed and Theorem 1's proof now works.

Sunday, June 28, 2020

Pluralism in public life

Consider this formulation of the central problem of a pluralist democracy:

  1. How to have a democracy where there is a broad plurality of sets of values?

Assuming realism about the correct set of values, this is roughly equivalent to:

  2. How to have a democracy where most people are wrong in different ways about the values?

But when we think about (1) and (2), we are led to thinking about the problem in different ways. Formulation (1) leads us to think the problem is with the state, which should somehow accommodate itself to the plurality of values. Formulation (2) points us, however, to the idea that the problem is with the people (including perhaps ourselves) who have the wrong set of values.

My own view is that there is a partial realism about values. Specifically, there is such a thing as the correct set of values. But there is a legitimate plurality of rankings between the values, though even there not everything goes—some rankings violate human nature. As a result, the problem is both with us, in that most of us have the wrong set of values and have some prioritizations that violate human nature, and with the state, which needs to accommodate a legitimate plurality of prioritizations.

Friday, March 1, 2019

Between subjective and objective obligation

I fear that a correct account of the moral life will require both objective and subjective obligations. That’s not too bad. But I’m also afraid that there may be a whole range of hybrid things that we will need to take into account.

Let’s start with clear examples of objective and subjective obligations. If Bob promised Alice to give her $10 but misremembers the promise and instead thinks he promised never to give her any money, then:

  1. Bob is objectively required to give Alice $10.

  2. Bob is subjectively required not to give Alice any money.

These cases come from a mistake about a particular fact. There are also cases arising from mistakes about general facts. Helmut is a soldier in the German army in 1944 who knows the war is unjust but mistakenly believes that because he is a soldier, he is morally required to kill enemy combatants. Then:

  3. Helmut is objectively required to refrain from shooting Allied combatants.

  4. Helmut is subjectively required to kill Allied combatants.

But there are interesting cases of mistakes elsewhere in the reasoning that generate curious cases that aren’t neatly classified in the objective/subjective schema.

Consider moral principles about what one should subjectively do in cases of moral risk. For instance, suppose that Carl and his young daughter are stuck on a desert island for the next three months. The island is full of chickens. Carl believes it is 25% likely that chickens have the same rights as humans, and he needs to feed his daughter. His daughter has a mild allergy to the only other protein source on the island: her eyes will sting and her nose run for the next three months if she doesn’t live on chicken. Carl thus thinks that if chickens have the same rights as humans, he is forbidden from feeding chicken to his daughter; but if they don’t, then he is obligated to feed chicken to her.

Carl could now accept one of these two moral risk principles (obviously, these will be derivative from more general principles):

  5. An action that has a 75% probability of being required, and a 25% chance of being forbidden, should always be done.

  6. An action that has a 25% probability of being forbidden with a moral weight on par with the prohibition on multiple homicides and a 75% probability of being required with a moral weight on par with that of preventing one’s child’s mild allergic symptoms for three months should never be done.
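
As an aside, a crude expected-weight comparison brings out the pull of (6); the numerical weights below are made-up stand-ins, and nothing in the argument assumes moral weights really combine like this:

```python
# Illustrative numbers only: a 25% chance of something on par with multiple
# homicides versus a 75% chance of sparing three months of mild allergy.
p_forbidden = 0.25
weight_if_forbidden = 1000.0   # stipulated: on par with multiple homicides
weight_if_required = 1.0       # stipulated: mild allergic symptoms averted

expected_wrong = p_forbidden * weight_if_forbidden        # 250.0
expected_right = (1 - p_forbidden) * weight_if_required   # 0.75
print(expected_wrong > expected_right)  # True: (6) says don't kill the chickens
```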

Suppose that in fact chickens have very little in the way of rights. Then, probably:

  7. Carl is objectively required to feed chicken to his daughter.

Suppose further that Carl’s evidence leads him to be sure that (5) is true, and hence he concludes that he is required to feed chicken to his daughter. Then:

  8. Carl is subjectively required to feed chicken to his daughter.

This is a subjective requirement: it comes from what Carl thinks about the probabilities of rights, moral principles about what to do in cases of risk, etc. It is independent of the objective obligation in (7), though in this example it agrees with it.

But suppose, as is very plausible, that (5) is false, and that (6) is the right moral principle here. (To see the point, suppose that he sees a large mammal in the woods that would suffice to feed his daughter for three months. If the chance that that mammal is a human being is 25%, that’s too high a risk to take.) Then Carl’s reasoning is mistaken. Instead, given his uncertainty:

  9. Carl is required to refrain from killing chickens.

But what kind of an obligation is (9)? Both (8) and (9) are independent of the objective facts about the rights of chickens and depend on Carl’s beliefs, so it sounds like it’s subjective like (8). But (8) has some additional subjectivity in it: (8) is based on Carl’s mistaken belief about what his obligations are in cases of moral risk, while (9) is based on what Carl’s obligations (but of what sort?) “really are” in those cases.

It seems that (9) is some sort of a hybrid objective-subjective obligation.

And the kinds of hybrid obligations can be multiplied. For we could ask about what we should do when we are not sure which principle of deciding in circumstances of moral risk we should adopt. And we could be right or we could be wrong about that.

We could try to deny (9), and say that all we have are (7) and (8). But consider this familiar line of reasoning: Both Bob and Helmut are mistaken about their obligations; they are not mistaken about their subjective obligations; so, there must be some other kinds of obligations they are mistaken about, namely objective ones. Similarly, Carl is mistaken about something. He isn’t mistaken about his subjective obligation to feed chicken. Moreover, his mistake does not rest in a deviation between subjective and objective obligation, as in Bob’s and Helmut’s case, because in fact objectively Carl should feed chicken to his daughter, as in fact (I assume for the sake of the argument) chickens have no rights. So just as we needed to suppose an objective obligation that Bob and Helmut got wrong, we need a hybrid objective-subjective one that Carl got wrong.

Here’s another way to see the problem. Bob thinks he is objectively obligated to give no money to Alice and Helmut thinks he is objectively obligated to kill enemy soldiers. But when Carl applies (5), what does he come to think? He doesn’t come to think that he is objectively required to feed chicken to his daughter. He already thought that this was 75% likely, and (5) does not affect that judgment at all. It seems that just as Bob and Helmut have a belief about something other than mere subjective obligation, Carl does as well, but in his case that’s not objective obligation. So it seems Carl has to be judging, and doing so incorrectly, about some sort of a hybrid obligation.

This makes me really, really want an account of obligation that doesn’t involve two different kinds. But I don’t know a really good one.

Friday, April 6, 2018

Peer disagreement and models of error

You and I are epistemic peers and we calculate a 15% tip on a very expensive restaurant bill for a very large party. As shared background information, add that calculation mistakes for you and me are pretty much random rather than systematic. As I am calculating, I get a nagging feeling of lack of confidence in my calculation, which results in $435.51, and I assign a credence of 0.3 to that being the tip. You then tell me that you’re not sure what the answer is, but that you assign a credence of 0.2 to its being $435.51.

I now think to myself. No doubt you had a similar kind of nagging lack of confidence to mine, but your confidence in the end was lower. So if all each of us had was our own individual calculation, we’d each have good reason to doubt that the tip is $435.51. But it would be unlikely that we would both make the same kind of mistake, given that our mistakes are random. So, the best explanation of why we both got $435.51 is that we didn’t make a mistake, and I now believe that $435.51 is right. (This story works better with larger numbers, as there are more possible randomly erroneous outputs, which is why the example uses a large bill.)

Hence, your lower reported credence of 0.2 not only did not push me down from my credence of 0.3, but it pushed me all the way up into the belief range.

Here’s the moral of the story: When faced with disagreement, instead of moving closer to the other person’s credence, we should formulate (perhaps implicitly) a model of the sources of error, and apply standard methods of reasoning based on that model and the evidence of the other’s credence. In the case at hand, the model was that error tends to be random, and hence it is very unlikely that an error would result in the particular number that was reported.
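
For concreteness, here is one such model worked through with the tip example. The modeling choices are all stipulations: my credence 0.3 is read as my posterior that $435.51 is correct given my own calculation, your credence 0.2 is read as the chance that your calculation came out right, and a botched calculation is taken to land uniformly on one of many possible wrong figures.

```python
# A minimal Bayesian sketch of the tip example (assumed model, see above).

def credence_after_agreement(my_credence, your_reliability, n_wrong):
    prior_odds = my_credence / (1 - my_credence)
    # Likelihood ratio for your reporting the very same number:
    #   if $435.51 is right, you report it when your calculation succeeds;
    #   if it is wrong, a random error must hit that exact figure.
    likelihood_ratio = your_reliability / ((1 - your_reliability) / n_wrong)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Agreement on one figure out of thousands of possible wrong ones pushes
# my credence from 0.3 to near certainty:
print(credence_after_agreement(0.3, 0.2, 10_000))  # ~0.999
```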

Friday, February 10, 2017

Measurement error

Let’s say that I am in the lab and I am measuring some unknown value U. My best model of the measurement process involves a random additive error E independent of U, with E having some known distribution, say a Gaussian of some particular standard deviation (perhaps specified by the measurement equipment manufacturer) centered around zero. The measurement gives the value 7.3. How should I now answer probabilistic questions like: “How likely is it that U is actually between 7.2 and 7.4?”

Here’s how this is sometimes done in practice. We know that U = 7.3 − E. Then we say that the probability that U is, say, between 7.2 and 7.4 is the same as the probability that E is between −0.1 and 0.1, and we calculate the latter probability using the known distribution of E.

But this is an un-Bayesian way of proceeding. We can see that from the fact that we never said anything about our priors regarding U, and for a Bayesian that should matter. Here’s another way to see the mistake: When I calculated the probability that U was between 7.2 and 7.4, I used the prior distribution of E. But to do that neglects data that I have received. For instance, suppose that U is the diameter of a human hair that I have placed between my digital calipers. And the calipers show 7.3 millimeters. What is the probability that the hair really has a diameter between 7.2 and 7.4 millimeters? It’s vanishingly small! That would be just an absurdly large diameter for a hair. Rather, the fact that the calipers show 7.3 millimeters shows that E is approximately equal to 7.3 millimeters. The posterior distribution of E, given background information on human hair thickness, is very different from the prior distribution.

Yet the above is what one does in practice. Can one justify that practice? Yes, in some cases. Generalize a little. Let’s say we measure the value of U to be α, and we want to know the posterior probability that U lies in some set I. This probability is:

P(U ∈ I | U + E = α) = P(α − E ∈ I | U + E = α).

Now suppose that E has a certain maximum range, say, from −δ to δ. (For instance, there is no way that digital calipers with four digits can show more than 9999 or less than −9999.) And suppose that U is uniformly distributed over the region from α − δ to α + δ, i.e., its distribution over that region is perfectly flat. In that case, it’s easy to see that E and the event U + E = α are actually statistically independent. Thus:

P(U ∈ I | U + E = α) = P(α − E ∈ I).

And so in this case our initial naive approach works just fine.
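
Spelling out the independence step (my notation):

```latex
% For e in [-delta, delta], Bayes' rule gives
\[
  f_{E \mid U+E=\alpha}(e) \;\propto\; f_E(e)\, f_U(\alpha - e)
  \;=\; f_E(e) \cdot \frac{1}{2\delta},
\]
% since alpha - e lies in [alpha - delta, alpha + delta], where f_U is
% constant. The constant factor drops out upon normalizing, so the
% conditional distribution of E given U + E = alpha equals its prior,
% which is the independence claimed above.
```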

In the original setting, if for instance we’re completely confident that E cannot exceed 0.5 in absolute value, and our prior distribution for U is flat from 6.8 to 7.8, then the initial calculation that the probability that U is between 7.2 and 7.4 equals the prior probability that E is between −0.1 and 0.1 stands. (The counterexample then doesn’t apply, since in the counterexample we had the possibility, now ruled out, that E is really big.)

The original un-Bayesian approach basically pretended that U was (per impossibile) uniformly distributed over the whole real line. When U is close to uniformly distributed over a large salient portion of the real line, the original approach kind of works.

The general point goes something like this: As long as the value of E is approximately independent of whether U + E = α, we can approximate the posterior distribution of E by its prior and all is well. In the case of the hair measurement, E was not approximately independent of whether U + E = 7.3, since if U + E = 7.3, then very likely E is enormous, but I assume E isn’t in other cases very likely to be enormous.
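
Here is a small numerical check of both halves of the story; the error standard deviation, the priors, and the grid are all invented for illustration:

```python
# Grid-based Bayes for P(7.2 < U < 7.4 | U + E = 7.3), with E ~ N(0, 0.05).
import numpy as np
from scipy.stats import norm

alpha, sigma = 7.3, 0.05                 # observed reading, assumed SD of E
u = np.linspace(0.0, 10.0, 200_001)      # grid of candidate values of U

def posterior_prob(log_prior, lo, hi):
    """P(lo < U < hi | U + E = alpha) by Bayes' rule on the grid."""
    log_post = log_prior(u) + norm.logpdf(alpha - u, 0.0, sigma)
    post = np.exp(log_post - log_post.max())  # log space avoids underflow
    post /= post.sum()
    return post[(u > lo) & (u < hi)].sum()

flat_log = lambda x: np.where((x > 6.8) & (x < 7.8), 0.0, -np.inf)
hair_log = lambda x: norm.logpdf(x, 0.07, 0.02)  # hairs ~0.07 mm (made up)

naive = norm.cdf(0.1, 0, sigma) - norm.cdf(-0.1, 0, sigma)
print(naive)                               # ~0.954: the naive calculation
print(posterior_prob(flat_log, 7.2, 7.4))  # ~0.954: flat prior, naive is fine
print(posterior_prob(hair_log, 7.2, 7.4))  # ~0: the hair counterexample
```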

This is no doubt stuff well-known to statisticians, but I’m not a statistician, and it’s clarified some things for me.

The naive un-Bayesian calculation I gave at the beginning is precisely the one that I used in my previous post when adjusting for errors in the evaluation of evidence. But an appropriate flatness of prior distribution assumption can rescue the calculations in that post.

Tuesday, September 1, 2015

Culpably mistaken conscience

It is plausible that we have duties of conscience arising from inculpable mistakes about what we should do. I shall assume this and argue that culpable mistakes also yield duties of conscience.

Here are two cases.

  1. Fred hires a neurologist to brainwash him into a state which will make him think the next day that it is his duty to embezzle money from his employer. The neurologist succeeds. The next day Fred conscientiously believes he has a duty to embezzle money from his employer. But he refrains from doing so out of fear of being caught.
  2. Sally hires a neurologist to brainwash her into a state which will make her think the next day that it is her duty to embezzle money from her employer. The neurologist fails. But that night, completely coincidentally, a rogue neurologist breaks into her home and while she's sleeping successfully brainwashes her into that very state the first neurologist failed to brainwash her into. The next day Sally conscientiously believes she has a duty to embezzle money from her employer. But she refrains from doing so out of fear of being caught. There are no further relevant differences between Sally's case and Fred's.

Fred is responsible for his conscience being mistaken. Sally is not responsible for that. Granted, Sally is culpable for trying to make her conscience be mistaken, but she is no more responsible for the mistaken conscience than the attempted murderer is responsible when her intended victim is coincidentally killed by someone else.

If inculpably mistaken conscience gives rise to duties, Sally has a duty of conscience to embezzle, and she fails in her duty. She thus acted immorally on both days: on the first day she acted immorally by asking to be brainwashed and on the second day she acted immorally by refusing to obey her conscience.

Thus:

  3. If culpably mistaken conscience does not give rise to duties, then Fred has not violated a duty of conscience by refraining from embezzling, while Sally has.

If culpably mistaken conscience does not give rise to duties, then Sally is in a morally worse state than Fred, being guilty of two things while Fred is only guilty of one.

But on the other hand, Fred and Sally have made all the same relevant decisions in the same subjective states. The only possibly relevant difference is entirely outside of them--namely, whether the neurologist that they actually hired is in fact the neurologist who brainwashed them. But the whole point of the idea of duties of conscience is to honor the subjective component in duty, and so if Fred and Sally's relevant decisions are all relevantly alike, Fred and Sally will also be alike in whether they've violated a duty of conscience. Hence:

  4. If Sally has violated a duty of conscience by refraining from embezzling, so has Fred.

It logically follows from (3) and (4) that:

  5. Culpably mistaken conscience gives rise to duties.

Of course all of this argument was predicated on the assumption that inculpably mistaken conscience gives rise to duties, and perhaps a reader may want to now revisit that assumption. But I think the assumption is true, leaving us with the conclusion that mistaken conscience gives rise to duties whether or not the mistake is culpable.

Now let's turn the case about. Suppose that both Fred and Sally follow their respective mistaken consciences and therefore embezzle. What should we say? Should we say that they did nothing wrong? It seems we shouldn't say that they did nothing wrong, for if they did nothing wrong then their consciences weren't mistaken, which they were. So let's accept (though I have a long-shot idea that I've talked about elsewhere that might get out of this) that they both did wrong. Thus, as in Mark Murphy's account of conscience, they were in the unhappy position that whatever they did would be wrong: by embezzling they defraud their employer and by not embezzling they violate their conscience.

But what about their culpability? Since Sally's case is one of inculpable ignorance, we have to say that Sally is not culpable for the embezzlement. Let's further suppose Sally and Fred's reasons for having themselves brainwashed were to get themselves to embezzle. Thus Sally is guilty of entering on a course of action intended to lead to embezzlement--basically, attempted embezzlement. But she's not guilty of embezzlement. What about Fred? He is certainly responsible for the embezzlement: it was intentionally caused by his immoral action of hiring the neurologist. But I am inclined to think that this is an effect-responsibility ("liability" is a good word) rather than action-culpability. Fred is responsible for the embezzlement in the way that one is responsible for the intended effects of one's culpable actions, in this case the action of hiring a brainwasher, but he isn't culpable for it in the central sense of culpability. (Compare: Suppose that instead of hiring a neurologist to brainwash himself, he hired the second brainwasher in Sally's case. Then Fred wouldn't be action-culpable for Sally's embezzlement, since one is only action-culpable for what one does, but only responsible for her embezzlement as an intended effect of his action.) Sally lacks that responsibility for the effect--the embezzlement--because her plan to get herself to embezzle the money failed as the embezzlement was caused by the rogue neurologist.

In terms of moral culpability for their actions, in the modified case where they conscientiously embezzle, Fred and Sally are, I think, exactly on par. Each is morally culpable precisely for hiring the neurologist, and that's all. That may seem like it gets them off the hook too easily, but it does not: they did something very bad in hiring the brainwasher. So, if I'm right, they are on par if they both conscientiously embezzle and they are on par if they both violate their consciences by refusing to embezzle.

Saturday, July 18, 2015

Two ways to be mistaken about what to do?

Joe is baking cupcakes for Sally. He has an inculpable false belief that cyanide is a tasty nutritional supplement and so adds it to the cupcakes to improve the taste, indeed killing Sally. Sally was innocent.

Fred is baking cupcakes for Samantha. He has an inculpable false belief--perhaps acquired through brainwashing--that it is right to kill any people one likes for fun and so adds cyanide to the cupcakes to kill Samantha for fun, and indeed he kills her. Samantha was innocent.

Neither Joe nor Fred is culpable (though in practice it would take a lot of convincing to make us agree that their beliefs were inculpable). But I feel that there is a difference in the two cases. I am inclined to say that Fred acted wrongly, though not culpably, while I am less inclined to say the same thing about Joe. What's the relevant difference?

Well, one difference is that Joe acts because of an inculpably false empirical belief while Fred acts because of an inculpably false normative belief. But that difference doesn't seem the relevant one to me. We could imagine cases where being wrong about normative matters (say, about whether a particular person one is punishing has acted wrongly) leads to an error more like Joe's than Fred's.

So what's the difference? Well, one difference that does seem relevant to me is this. Both Joe and Fred are killing innocent people. But Joe doesn't act under the description killing an innocent person while Fred does. Thus Fred acts under a description that entails the wrongness of the action (assuming that it's necessarily always wrong to kill an innocent) while Joe does not. I think this is getting close, but doesn't quite get at the difference. Suppose Joe does something that he knows to be a killing if and only if some complex mathematical statement p is true, and Joe is inculpably sure that the statement is false, though it's actually necessarily true. It may well be that Joe is acting under the description killing an innocent person if and only if p, and an action's falling under this description does entail the wrongness of the action.

Maybe the right tool for distinguishing the two cases is intention? Joe doesn't intentionally kill. Fred does. That's certainly a very relevant difference. But we can imagine a case like Joe's where there is intentional killing. Suppose James is a law enforcement officer with the inculpable false belief that Suzy is trying to kill innocent people. (Perhaps he's wandered onto a movie set where Suzy is playing a mass shooter with great plausibility and superb special effects.) Then James intentionally kills Suzy, but his error seems much more like Joe's than like Fred's. Maybe we can, however, say that James is not intentionally killing an innocent, while Fred is. But that could be a misunderstanding of Fred's intentions as far as my description goes. The story can be elaborated so Fred is no more intending to kill an innocent than if I shake hands with you I am intending to shake hands with someone wearing a green shirt (assuming you're obviously wearing a green shirt). Your wearing a green shirt just doesn't enter into my intentions, and we can suppose that Fred gets no special pleasure out of the innocence of the person he kills, so that innocence doesn't enter into his intentions.

Nonetheless, perhaps we can say this: It is always wrong to intend to kill someone. It is not always wrong to intend to kill an aggressor. But when a person virtuously intends to kill an aggressor, maybe she doesn't automatically intend to kill this person. Rather, she intends to kill this aggressor. This person's being an aggressor suffuses her intentions. Thus James who kills Suzy the apparent aggressor doesn't have the intention to kill Suzy or to kill a person. He has the intention to kill Suzy the aggressor, an intention that he fails to fulfill. If that's right, then we can say that both Joe and James act under a morally upright intention: to flavor cupcakes and to kill an aggressor, respectively.

I am worried about this solution, though. It may require more to be packed into morally upright intentions than is psychologically realistic. After all, it's not right to kill an aggressor as such. It's only right to kill an aggressor who threatens significant harm and cannot be stopped in non-lethal ways, and there are surely lots of other conditions (e.g., Aquinas thinks that only officers of the state have the right to intentionally kill--we can defend ourselves in unintentionally lethal ways, he allows, however). Should we pack all of these conditions into James' intention? Maybe James can summarize mentally. He intends to rightly kill this aggressor. And rightly killing this aggressor is an intention that has the property that necessarily an action that fulfills it is right. However, the very same thing could be said about Fred's case. Given Fred's belief that it's right to kill for fun, Fred could be intending to rightly kill Samantha.

I wish I had a satisfactory resolution. Maybe we don't need the upright intention to entail rightness. Maybe all James needs is the intention to kill an aggressor, even though not all cases of killing an aggressor are right?

I really don't know. As I think about cases like this, I wonder how sharp the distinction between Joe and James, on the one hand, and Fred, on the other, really is. Maybe all we have is a vague distinction that Joe and James' errors do not constitute them as morally corrupt, while Fred's error does constitute him as morally corrupt (even if he is not culpable for this moral corruption). But that distinction doesn't cut quite the line we want. Take my mathematical case and change p into a moral proposition. Suppose Joe has the inculpable false belief that theft is right and intends that Sally die if and only if theft is wrong. Then Joe's root error does constitute him as morally corrupt, but he's still not like Fred. Nonetheless, even though the distinction between beliefs that make one morally corrupt and those that don't may not cut the exact line we want, maybe that's the only non-gerrymandered line to be cut here? I really want to say that Joe and James both did something that we wish they hadn't done, while Fred did something wrong, even though all three were non-culpable. But I don't know if I can support this distinction.

Tuesday, December 13, 2011

More on Spinoza on error

Spinoza's main theory of intentionality is simple. What is the relationship between an idea and what it represents? Identity. An idea is, simply, identical with its ideatum. What saves this from being a complete idealism is that Spinoza has a two-attribute theory to go with it. Thus, an idea is considered under the attribute of thought, while its ideatum is, often, considered under the attribute of extension. Thus, the idea of my body is identical with my body, but when we talk of the "idea" we are conceiving it under the attribute of thought, and when we talk of "body" we are conceiving it under the attribute of extension.

But there is both a philosophical and a textual problem for this, and that is the problem of how false ideas are possible. Since presumably an idea is true if and only if what it represents exists, and an idea represents its ideatum, and its ideatum is identical with it, there are no false ideas, it seems. The philosophical problem is that there obviously are! The textual problem is that Spinoza says that there are, and he even gives an account of how they arise. They arise always by privation, by incompleteness. Thus, to use one of Spinoza's favorite examples, consider Sam who takes, on perceptual grounds, the sun to be 200 feet away. Sam has the idea of the sun impressing itself on his perceptual faculties as if it were 200 feet away, but lacks the idea that qualifies this as a mere perception. When we go wrong, our ideas are incomplete by missing a qualification. It is important metaphysically and ethically to Spinoza that error have such a privative explanation. But at the same time, this whole story does not fit with the identity theory of representation. Sam's idea is identical with its ideatum. It is, granted, confused, which for Spinoza basically means that it is abstracted, unspecific, like a big disjunction (the sun actually being 200 feet away and so looking or the sun actually being 201 feet away and looking 200 feet away or ...).

Here is a suggestion how to fix the problem. Distinguish between fundamental or strict representation and loose representation. Take the identity theory to be an account of strict representation. Thus, each idea strictly represents its ideatum and even confused ideas are true, just not very specific. An idea is then strictly true provided that its ideatum exists, and every idea is strictly true. But now we define a looser sense of representation in terms of the strict one. If an idea is already specific, i.e., adequate (in Spinoza's terminology) or unconfused, then we just say that it loosely represents what it strictly represents. But:

  • When an idea i is unspecific, then it loosely represents the ideatum of the idea i* that is the relevant specification of i when there is a relevant specification of i. When there is no relevant specification of i, then i does not loosely represent anything.

Here, we may want to allow an idea to count as its own specification—that will be an improper specification. When an idea is its own relevant specification, then the idea loosely represents the same thing as it strictly represents, and it must be true. I am not sure Spinoza would allow a confused idea to do that. If he doesn't, then we have to say that specification must be proper specification—the specifying idea must be more specific than what it specifies, it must be a proper determinate of the determinable corresponding to the unspecific idea i.

An idea, then, is loosely true provided that it loosely represents something. Otherwise, it is loosely false. Error is now possible. For there may not exist an actual relevant specifying idea. Or, to put it possibilistically, the relevant specification may be a non-actual idea.

What remains is to say what the relevant specification is. Here I can only speculate. Here are two options. I am not proposing either one as what Spinoza might accept, but they give the flavor of the sorts of accounts of relevance that one might give.

  1. A specification i* of i is relevant provided that the agent acts as if her idea i were understood as i*.
  2. A specification i* of i is relevant provided that most of the time when the agent has had an idea relevantly like i the ideatum of an idea relevantly like i* exists (i.e., an idea relevantly like i* exists), and there is no more specific idea than i* that satisfies this criterion (or no more specific idea than i* satisfies this criterion unless it is significantly more gerrymandered than i*?).

I think Spinoza would be worried in (1) about the idea of acting as if a non-existent idea were believed. This is maybe more Wittgensteinian than Spinozistic. I think (2) isn't very alien to Spinoza, given what he says about habituation.

Loose truth and loose representation may be vague in ways that strict truth and strict representation are not. The vagueness would come from the account of relevant specification.

I don't know that Spinoza had a view like the one I sketch above. But I think it is compatible with much of what he says, and would let him hold on to the insight that fundamental intentionality is secured by identity, while allowing him to say that privation makes error possible by opening up the way for ideas which are sufficiently unspecific that they have no correct relevant specification.

Friday, July 23, 2010

Deep Thoughts XXVI

You can't be wrong if you have no opinions.

Friday, August 8, 2008

Thick ends

In an earlier post, I offered the hypothesis that an action's end already includes the means under some description, perhaps specific or perhaps general. If this hypothesis is right, then we get an interesting simplification of moral evaluation. Traditionally, deontologists have had to evaluate both the end and its means. But if the means are built into the end, then one of the steps in moral evaluation is removed.

How might one do this? Well, one might say that regardless of what E is, the end "E by any means" is the wrong end to pursue. Why? Because the specified means include morally illegitimate means. If this is right, then a virtuous person only wills ends like "E by any legitimate means" or "E by means of morally licit training" or "E by means of pressing the seventh button on the left when this is permissible".

This has some interesting consequences. Suppose that x is a virtuous agent who erroneously believes that a particular means m to E is morally licit. Then, x being virtuous wills "E by the licit means m" or something like that, and hence when x executes m to gain E, she fails to achieve her end, since her end is not just E but "E by the licit means m". Hence, we can say something about what goes wrong when someone in good conscience does wrong, or at least does wrong in this way: she fails to achieve what she was trying to achieve.

In such cases, we get a different way of answering a puzzle that Cardinal Ratzinger raises: Why not count as fortunate people who in good conscience act wrongly? Aren't they lucky that their conscience leads them astray, since as a result they are non-culpable in their wrongdoing? (Ratzinger's solution was that errors of conscience are preceded by earlier sins that led to the errors.) The solution is that, at least in cases of inappropriate means, the virtuous person who in erring conscience does wrong is one who fails to achieve her ends, though she may erroneously think she succeeds. But a failure to achieve one's ends is surely an unfortunate thing, and hence we cannot count this person entirely lucky. True, it is better to fail innocently than to clearheadedly do wrong and be culpable, but it is clearly better yet to clearheadedly succeed at doing right.

If one can extend the theory to include the circumstances in the ends, we achieve a further theoretical simplification.

Of course, as in all simplifications, one runs the risk of losing some important distinctions when one does these things.

Tuesday, July 15, 2008

Deep Thoughts XIII

You can never go wrong[note 1] believing a tautology.