
Wednesday, January 30, 2019

Justification and units of assertion

It’s clear to me that each of two assertions could individually meet the evidential bar for assertibility, but that their conjunction, being typically less probable than either conjunct, might not. But then there is something very strange about the idea that one could justifiably assert “S1. S2.” but not “S1 and S2.” After all, is there really a difference in what one is saying when one inserts a period and when one inserts an “and”?
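
A toy illustration, with numbers of my own choosing: suppose the evidential bar for assertibility is 0.9 and that S1 and S2 are independent. Then each conjunct clears the bar while the conjunction does not:

```latex
% Toy numbers, mine, purely for illustration.
P(S_1) = P(S_2) = 0.93, \qquad P(S_1 \wedge S_2) = 0.93^2 \approx 0.86 < 0.9
```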

Perhaps the thing to say is that the units of assertion are in practice not single sentences, but larger units. How large? Well, not whole books. Plainly, as the preface paradox notes, one can be justified in producing a book while thinking there is an error somewhere in it (as long as one does not know where the error lies). I think not whole articles, either. Again, we expect to be mistaken somewhere in a complex article. Perhaps the unit of assertion is something more of the order of a paragraph or less, but more than a sentence.

If so, then in typical cases “S1. S2.” will be a single unit of assertion, and to be justified in asserting the unit, one needs to be justified in the conjunction. This gives us a pretty precise definition of a unit of assertion: a unit of assertion is an assertoric locution that is lengthwise maximal with respect to needing to be justified.

The unit of assertion is in practice probably determined by a mix of content, context, intonation, length of pauses, etc. For instance, a topic switch is apt to end a unit of assertion, and how long the pause between the sentences in “S1. S2.” is may sometimes make a difference to whether the sentences form a single unit of assertion.

Surely people have written on this.

Wednesday, September 23, 2015

An argument against heavy-weight Platonism

Heavy-weight Platonism explains (or grounds) something's being green by its instantiating greenness. Light-weight Platonism refrains from making such an explanatory claim, restricting itself to saying that something is green if and only if it instantiates greenness. Let's think about a suggestive argument against heavy-weight Platonism.

It would be ad hoc to hold the explanatory thesis for properties but not for relations. The unrestricted heavy-weight Platonist will thus hold that for all n>0:

  1. For any n-ary predicate F, if x1,...,xn are F, this is because x1,...,xn instantiate Fness.
(One might want to build in an ad hoc exception for the predicate "instantiates" to avoid regress.) But just as it was unlikely that the initial n=1 case would hold without the relation cases, i.e., the n>1 cases, so too:
  2. If (1) holds for each n>0, then it also holds for n=0.
What is the n=0 case? Well, a 0-ary predicate is just a sentence, a 0-ary property is a proposition, the "-ness" operator when applied to a sentence yields the proposition expressed by the sentence, and instantiation in the 0-ary case is just truth. Thus:
  3. If (1) holds for each n>0, then for any sentence s, if s, then this is because of the truth of the proposition that s.
(The quantification is substitutional.) For any sentence s, let <s> be the proposition that s. The following is very plausible:
  4. For any sentence s, if s, then <s> is true because s.
But (4) conflicts with (3) (assuming some sentence is true). In fact, to generate a problem for (3), we don't even need (4) for all s, just for some, and surely the proposition <The sky is blue> is true because the sky is blue, rather than the other way around: the facts about the physical world explain the relevant truth facts about propositions. Thus:
  5. It is false that (1) holds for each n>0.

The above argument is compatible, however, with a restricted heavy-weight Platonism on which sometimes instantiation facts explain the possession of attributes. Perhaps, for instance, if "is green" is a fundamental predicate, then Sam is green because Sam instantiates greenness, but this is not so for non-fundamental predicates. And maybe there are no fundamental sentences (a fundamental sentence would perhaps need to be grammatically unstructured in a language that cuts nature at the joints, and maybe a language that cuts nature at the joints will require all sentences to include predication or quantification or both, and hence not to be unstructured). If so, that would give a non-arbitrary distinction between the n>0 cases and the n=0 case. There is some independent reason, after all, to think that (1) fails for complex predicates. For instance, it doesn't seem right to say that Sam is green-and-round because he instantiates greenandroundness. Rather, Sam is green-and-round because Sam is green and Sam is round.

Monday, August 15, 2011

Implicature and lying

Philosophers say things like: "Asserting 'There is no conclusive proof that Smith is a plagiarist' implicates that there is a genuine possibility of Smith's being a plagiarist." (And yet taken literally "There is no conclusive proof that Smith is a plagiarist" is true even if no one ever suspected Smith of plagiarism.) However, what one implicates not only has propositional content, but also illocutionary force, and both the content and the force are implicated. So if we want to be more explicit, we should say something like: "Asserting 'There is no conclusive proof that Smith is a plagiarist' implicates the suggestion (or insinuation or even assertion) that there is a genuine possibility of Smith's being a plagiarist." Which of the forces--suggestion, insinuation or assertion--is the right one to choose is going to be a hard question. Maybe there is vagueness (ugh!) here. In any case, we don't just implicate propositions--we implicate whole speech acts. A question can implicate an assertion and an assertion a question ("It would be really nice if you would tell me whether...").

I used to wonder whether the moral rules governing lying (which I think are very simple, namely that it is always wrong to lie, but I won't be assuming that) extend to false implicature. I now realize that the question is somewhat ill-formed. The moral rules governing lying are tied specifically to assertions, not to requests or commands. One can implicate an assertion, but one can also implicate other kinds of speech acts, and it only makes sense to ask whether the moral rules governing lying extend to false implicature when what is implicated is an assertion or assertion-like.

And I now think there is a very simple answer to the question. The moral rules governing lying do directly extend to implicated assertions. But just as these moral rules do not directly extend to other assertion-like explicit speech acts, such as musing, so too they do not directly extend to other assertion-like implicated speech acts, such as suggesting. The rules governing an implicated suggestion are different from the rules governing an explicit assertion not because the implicated suggestion is implicated, but because the implicated suggestion is a suggestion. If it were an explicit suggestion, it would be governed by the same rules.

That said, there are certain speech acts which are more commonly implicated than made explicitly--suggestion is an example--and there may even be speech acts, like insinuation (Jon Kvanvig has impressed on me the problematic nature of "I insinuate that...") that don't get to be performed explicitly (though I don't know that they can't be; even "I insinuate that..." might turn out to be a very subtle kind of insinuation in some weird context).

I think the distinction between the explicit speech act and the implicated speech act does not mark a real joint in nature. The real joint in nature is not between, say, explicit and implicated assertion, but between, say, assertion and suggestion (regardless of which, if any, is explicit or implicated). Fundamental philosophy of communication does not, I think, need the distinction between the explicit speech act and the implicated speech act. That distinction is for the linguists--it concerns the mechanics of communication (just as do the distinctions between written and spoken English, or between French and German) rather than its fundamental nature.

Friday, January 7, 2011

Saying

Consider this theory. To utter a sentence is to offer a description of a proposition. Sometimes, whether contingently or necessarily, no proposition meets the description. That is a case of nonsense. Sometimes more than one proposition meets the description. That is a case of vagueness (ambiguity is a case where the sentence's words fail to determine the proposition, but the sentence as a whole does—in agreement with how one might read Frege on tense, I take the context to be a part of the sentence or, in his terminology, the "expression"). Subintensional vagueness is when the propositions that meet the description necessarily have the same truth value. Intensional vagueness is when they possibly have different truth values. Extensional vagueness is when they actually have different truth values. I suspect that subintensional vagueness occurs all over the place.
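
A compact restatement of the taxonomy may help; this is just a sketch of the definitions above, with D for the set of propositions meeting the sentence's description (my notation, not part of the theory itself):

```latex
% Sketch: the three grades of vagueness defined above, where D is the
% set of propositions meeting the description (vagueness: |D| > 1).
\begin{align*}
\text{subintensional:} &\quad \forall p,q \in D\ \ \Box(p \leftrightarrow q)
   && \text{(necessarily the same truth value)}\\
\text{intensional:}    &\quad \exists p,q \in D\ \ \Diamond\neg(p \leftrightarrow q)
   && \text{(possibly different truth values)}\\
\text{extensional:}    &\quad \exists p,q \in D\ \ \neg(p \leftrightarrow q)
   && \text{(actually different truth values)}
\end{align*}
```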

How do these descriptions work? I think in a recursive way. For instance, the sentence "F(a)" describes a proposition that applies Fness to a. What does it mean to say that a proposition "applies" Fness to a? Well, we can say this: necessarily, a proposition that applies Fness to a is true if and only if a exists and has Fness. But this is not sufficient to determine the concept of application—after all, there are many propositions that satisfy this description, such as the proposition that God knows that a has Fness and the proposition that a has Fness or 2+2=5. We can lay down a few more conditions on application. A proposition that applies Fness to a is not going to be truthfunctionally complex if F is not truthfunctionally complex. (But truthfunctional complexity needs to be defined, and it is hard to do that.) The functor application from property-object pairs to propositions is natural. Maybe something can be said about explanatory priority.

But, on the present theory, there is little reason to think that filling out the above story will narrow down the notion of application, or of Fness for that matter, so far as to ensure that there is only one proposition that meets the description given by the sentence "F(a)". Similar points apply to truthfunctionally complex sentences. Thus, "s and u" describes a proposition which is the output of the conjunction functor as applied to the propositions described by "s" and by "u". But we cannot give an unambiguous definition of the conjunction functor. So subintensional vagueness is everywhere.

Now we have a story to tell about the classic liar sentence: "This is a falsehood." Such a sentence attempts to describe a proposition that denies truth to itself. But there is no reason to suppose that there is such a proposition, and indeed there is a very good reason to suppose that there isn't—namely, that the supposition that there is such a proposition immediately leads to a logical contradiction (what better argument could one ask for?). (One might try Goedel numbering and diagonalization. But Goedel numbering works for sentences, not propositions, and because of subintensional vagueness, the correspondence between sentences and propositions is not one-to-one.)

The contingent liar is tougher, but perhaps some generalization of the solution will work. (It might help if some propositions are contingent beings.)

Friday, September 11, 2009

On sentence types

Sentence tokens come in many types, such as stupid sentence tokens, true sentence tokens, sentence tokens written in green ink, tokens of "Snow is white", tokens of "Snow is white" written in a serif font and in a 4pt typeface or smaller, etc. Most of these types of sentence tokens do not qualify as "sentence types". In fact, of the types just listed, the only sentence type is tokens of "Snow is white". Types of sentence tokens are abstractions from sentence tokens. But there are different kinds and levels of abstraction, and so not all types of sentence tokens count as "sentence types".

I will argue that the notion of a sentence type is to a large extent merely pragmatic. We consider the following to each be a token of the same sentence type:

  1. Snow is white. [in roman type]
  2. Snow is white. [in bold]
  3. Snow is white. [in italics]
The difference between roman, bold and italic, as well as differences in size, are differences that do not make a difference between sentence types. Similarly, "Snow is white" as said by me with my weird Polish-Canadian accent and as said by the Queen are tokens of the same sentence type. On the other hand,
  4. Snow, it is white.
is a token of a different sentence type.

Say that a difference between the appearances (visual or auditory) of tokens that does not make for a difference in sentence type is a "merely notational difference". So, the way logicians think of language is roughly something like this. First, we abstract away merely notational differences. The result of this abstraction is sentence types. Then we can do logic with sentence types, and doing logic with sentence types helps us to do other abstractions. Thus, Lewis abstracts from differences that do not affect which worlds verify the sentence, and the result is his unstructured propositions (which he, in turn, identifies with sets of worlds). Or we might abstract from differences that do not affect meaning, and get propositions. (This simplifies by assuming there are no indexicals.)
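
The ladder of abstractions can be sketched as a chain of quotients by equivalence relations; the notation here is mine, not Lewis's:

```latex
% Sketch: levels of abstraction as quotients, where T is the set of
% sentence tokens, ~n is merely notational difference, ~w is "verified
% by the same worlds", and ~m is "same meaning".
\begin{align*}
\text{sentence types} &= T/{\sim_n}\\
\text{Lewis's unstructured propositions} &= (T/{\sim_n})/{\sim_w}
  \quad\text{(identified with sets of worlds)}\\
\text{propositions} &= (T/{\sim_n})/{\sim_m}
\end{align*}
```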

But one could do things differently. For instance, we could say that differences in typeface are not merely notational differences, but in fact make for a different sentence type. Our logic would then need to be modified. In addition to rules like conjunction-introduction and universal-elimination, we would need rules like italic-introduction and bold-elimination. Moreover, these rules do not contribute in an "interesting way" to the mathematical structures involved. (John Bell once read to me a referee's report on a paper of his. As I remember it, it was something like this: "The results are correct and interesting. Publish." There are two criteria for good work in mathematics: it must, of course, be correct but it must also be interesting.) Moreover, there will be a lot of these rules, and they're going to be fairly complicated, because we'll need a specification of what is a difference between two symbol types (say, "b" and "d") and what is a difference between the same symbol type in different fonts. Depending on how finely we individuate typefaces (two printouts of the same document on the same printer never look exactly alike), this task may involve specifying a text recognition algorithm. This is tough stuff. So there is good pragmatic reason to sweep all this stuff under the logician's carpet as merely notational differences.

Or one could go in a different direction. One could, for instance, identify the differences between sentences (or, more generally, wffs) that are tautologically equivalent as merely notational differences. Then, "P or Q" and "Q or P or Q" will be the same sentence type. Why not do that? One might respond: "Well, it's possible to believe that P or Q without believing that Q or P or Q. So we better not think of the differences as merely notational." However, imagine Pierre. He has heard me say that London is pretty and the Queen say that London is ugly. But he has failed to recognize, behind the difference in accents, that my token of "London" and the Queen's token of it both name the same city. If we were to express Pierre's beliefs, it would be natural to say "Pierre believes that [switch to Pruss's accent] London [switch back] is pretty and that [switch to Her Majesty's accent] London [switch back] is ugly." So the belief argument against identifying "P or Q" with "Q or P or Q" pushes one in the direction of the previous road—that of differentiating very finely.

On this approach, propositional logic becomes really easy. You just need conjunction-introduction and disjunction-introduction.
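
To see why the two rules suffice once tautologically equivalent wffs are identified, here is a sketch of modus ponens; the key fact is that if A entails B, then A ∨ B is tautologically equivalent to B:

```latex
% Sketch: deriving Q from P and P -> Q with only
% conjunction-introduction and disjunction-introduction,
% given the identification of tautological equivalents.
\begin{align*}
1.\ & P && \text{premise}\\
2.\ & P \to Q && \text{premise}\\
3.\ & P \wedge (P \to Q) && \text{conjunction-introduction, 1, 2}\\
4.\ & (P \wedge (P \to Q)) \vee Q && \text{disjunction-introduction, 3}
\end{align*}
```

Since P ∧ (P → Q) entails Q, line 4 is tautologically equivalent to Q, and so, on this approach, it simply is the sentence type Q. The same recipe handles any tautologically valid inference: conjoin the premises, then disjoin the conjunction with the desired conclusion.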

Or one could do the following: Consider tokens of "now" to be of different word types (the comments on the arbitrariness of sentence types apply to word types) when they are uttered at different times. Then, tokens of "now" are no longer indexicals. Doing it this way, we remove all indexicality from our language. Which is nice!

Or one can consider "minor variations". For instance, logic textbooks often do not give parenthesis introduction and elimination rules, or rules on handling spaces in sentences. As a result, a good deal of the handling of parentheses and spaces is left for merely notational equivalence to take care of. It's easy to vary how one handles a language in these ways.

There does not seem to be any objective answer, for any language, as to where exactly merely notational differences leave off. There seem to be some non-pragmatic lines one can draw. We do not want sentence types to be so broad that two tokens of the same non-paradoxical and non-indexical type can have different truth values. Nor, perhaps, do we want to identify sentence tokens as being of the same type just because they are broadly logically equivalent when the equivalence cannot be proved algorithmically. (Problem: Can the equivalences between tokens in different fonts and accents be proved algorithmically? Can one even in principle have a perfect text scanning and speech recognition algorithm?) But even if we put in these constraints, a lot of flexibility remains. We could identify all tautologously equivalent sentences as of the same type. We could even identify all first order equivalent sentences as of the same type.

Here is a different way of seeing the issue, developed from an idea emailed to me by Heath White. A standard way of making a computer language compiler is to split the task up into two stages. The first stage is a "lexer" or "lexical analyzer" (often generated automatically by a tool like flex from a set of rules). This takes the input, and breaks it up into "tokens" (not in the sense in which I use the word)—minimal significant units, such as variable names, reserved keywords, numeric literals, etc. The lexical analyzer is not in general one-to-one. Thus, "f( x^12 + y)" will get mapped to the same sequence of tokens as "f(x^12+y )"—differences of spacing don't matter. The sequence of tokens may be something one can represent as FUNCTIONNAME("f") OPENPAREN VARIABLENAME("x") CARET NUMERICLITERAL(12) PLUS VARIABLENAME("y") CLOSEPAREN. After the lexical analyzer is done, the data is handed over to the parser (often generated automatically by a tool like yacc or bison from a grammar file).
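
To make the hand-off concrete, here is a minimal lexer sketch in Python for the toy expression above. The token names follow the example, but the rules and the function/variable disambiguation are illustrative assumptions of mine, not any real flex specification:

```python
import re

# A minimal illustrative lexer. Each rule pairs a token name with a
# regular expression; whitespace is skipped, which is why differences
# of spacing turn out to be merely notational.
TOKEN_RULES = [
    ("NUMERICLITERAL", r"\d+"),
    ("NAME", r"[A-Za-z_]\w*"),  # refined below into FUNCTIONNAME/VARIABLENAME
    ("OPENPAREN", r"\("),
    ("CLOSEPAREN", r"\)"),
    ("CARET", r"\^"),
    ("PLUS", r"\+"),
    ("SKIP", r"\s+"),           # spacing: abstracted away
]
MASTER = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_RULES))

def lex(text):
    tokens = []
    for m in MASTER.finditer(text):
        kind, value = m.lastgroup, m.group()
        if kind == "SKIP":
            continue
        if kind == "NAME":
            # Assumption: a name directly followed by "(" is a function name.
            rest = text[m.end():].lstrip()
            kind = "FUNCTIONNAME" if rest.startswith("(") else "VARIABLENAME"
        tokens.append((kind, value))
    return tokens

# The lexer is not one-to-one: both spellings yield the same token sequence.
assert lex("f( x^12 + y)") == lex("f(x^12+y )")
```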

Now, in practice, the hand-off between the lexer and the parser is somewhat arbitrary. If one really wanted to and was masochistic enough, one could write the whole compiler in the lexer. Or one could write a trivial lexer, one that spits out each character (or even each bit!) as a separate token, and then the parser would have to work really hard.

Nonetheless, as Heath pointed out to me, there may be an objective answer to where notational difference leaves off. For it may be that our cognitive structure includes a well-defined lexer that takes auditory (speech), visual (writing or sign language) or tactile (Braille or sign language for the deaf-blind) observations and processes them into some kind of tokenized structure. If so, then two tokens are of the same sentence type provided that the lexer would normally process them into the same structure. If so, then sentence type will in principle be a speaker-relative concept, since different people's lexers might work differently. To be honest, I doubt that it works this way in me. For instance, I strongly doubt that an inscription of "Snow is white" and an utterance of "Snow is white" give rise to any single mental structure in me. Maybe if one defines the structure in broad enough functional terms, there will be a single structure. But then we have arbitrariness as to what we consider to be functionally relevant to what.

The lesson is not that all is up for grabs. Rather, the lesson is that the distinctions between tokens and types should not be taken to be unproblematic. Moreover, the lesson supports my view—which I think is conclusively proved by paradoxical cases—that truth is a function of sentence token rather than sentence type.

Saturday, August 15, 2009

Ungrammatical sentences

I think there is something important to be learned in the philosophy of language from the fact that grammatically wrong sentences often succeed in clearly expressing propositions. (Maybe something along the lines of the claim that speaker meaning is the only meaning there is.)

Thursday, August 13, 2009

Some opinions, with little argument

Consider:

  1. The first full sentence in this post is not true.
  2. The second is true.
  3. The third does not express a proposition.
  4. The fourth is true or false.
  5. The fifth expresses a proposition.

I am quite sure that: The token (1) does not express a proposition that is true. Therefore, the first full sentence (here I am stipulatively taking "sentence" in the usual grammatical sense, rather than in a more beefy sense that I actually prefer) in this post is not true. Note that there is no contradiction between the two preceding sentences. In fact, I think the token (1) does not express a proposition.

I am pretty sure that the token (2) does not express a proposition that is true. Therefore, the second full sentence in this post is not true. It does not follow from this that the token (2) expresses a proposition that is false. I suspect the token (2) does not express a proposition.

I am strongly inclined to think that the tokens (3)-(5) likewise fail to express propositions.

Now, if only I had good arguments for my judgments about (2)-(5), I'd be happy.

Thursday, July 16, 2009

Sentence tokens

It is tempting to identify sentence tokens with certain noises or inscriptions. But this is mistaken, if we want meaning and truth to be a function of the sentence token. For it is easy to imagine a case where a speaker with a single noise says two things, one a truth in language L1 and the other a falsehood in language L2, to two different interlocutors. It's kind of hard to come up with examples using actual languages, except of the one-word sort. My favorite there would be pointing at a bottle and saying to two people, one a speaker only of English and the other a speaker only of German "Gift", and each ignorant of the other's presence (we can imagine them on either side of a divider). To the speaker of English, one has said that the bottle is a present; the speaker of German has been warned that it is poison. A different kind of example can be produced using ambiguity and context. If I've just been talking with Fred about rivers and with George about finances, and neither was a party to the other conversation, I can say: "I was by the bank yesterday", deliberately telling Fred that I was by the riverbank yesterday and telling George that I was visiting a financial institution. The two claims might be both true, or both false, or one true and the other false.

So if we want sentence tokens to play the role of resolving ambiguity, taking care of indexicals, etc., so that meaning and truth would be a function of the token, the tokens can't be noises and inscriptions. They could be noise (or inscription) and intention pairs, or they could be utterings (maybe in each of my above cases, I deliberately do two utterings with one same noise, just as I might do two mosquito killings with one well-placed slap), or they could be noise and understanding pairs (if we prefer to locate meaning on the side of the listener), or they could be acts of hearing.

Thursday, June 18, 2009

Understanding a sentence

If you don't like centered propositions, drop the "centered" from the following. I am using the phrase "knowledgeably understand" to boost understanding to a level that requires the kinds of justification that knowledge does. Perhaps understanding already has that built-in, in which case "knowledgeably" can be dropped.

Now, consider the following inconsistent triad, each proposition of which is defensible:

  1. To knowledgeably understand a sentence it suffices to know the language and to apply appropriate symbol recognition, symbol manipulation and logical skills to that sentence.
  2. Necessarily, someone who knowledgeably understands a sentence knows what (centered) proposition that sentence expresses or else knows that the sentence does not express a (centered) proposition.
  3. There are sentences s such that one cannot know whether s expresses a proposition simply by knowing the language, and by applying appropriate symbol recognition, symbol manipulation and logical skills to that sentence.
I think (1) and (2) are quite intuitively plausible, but (3) needs an argument. Here is a standard argument (Kripke came up with cases like this). I erase my board and write on it "No sentence on Jon's board expresses a true (centered) proposition." Let s be this sentence. Then I cannot know whether s expresses a (centered) proposition unless I know what Jon has on his board. For Jon, being a philosopher, might easily have written on his board "Every sentence on Alex's board expresses a true (centered) proposition." But if that is what is on his board, then the sentence on my board cannot express a (centered) proposition. (If it expresses a (centered) proposition p, then plainly p is true if and only if p is not true.) But I cannot know what Jon has on his board simply by knowing the language and applying symbolic and logical skills to the sentence on my board. Hence, (3) is true.

Given the above really good argument for (3), we need to reject (1) or (2). I am inclined to reject (1), as (2) seems very, very plausible. Or, perhaps better yet, we might reject the notion of sentences that the paradox is predicated on.

Actually, everybody should reject (1) in the case of natural languages, simply because of the problems of homonymy, and that's not very interesting. But the argument against (1) (assuming the notion of sentences that the paradox is based on) continues to work even if we distinguish homonyms with subscripts, and similarly deal with other "standard" contextual ambiguities.

Thursday, December 18, 2008

Nonsense and externalism (Language, Part VI)

I will assume at first a fairly standard view of language, not my own weird view.

The following two claims are very plausible:

  1. Whether a particular sequence of words from a language L expresses a proposition does not depend on anything other than facts about L.
  2. A proposition is either true or false.
But in fact, (1) and (2) are not both true. For, consider the following line of words at the top of a page:
  3. The next line of words expresses a true proposition.
Assuming a proposition is either true or false, it follows that whether (3) expresses a proposition depends on what the next line of words is. If the next line of words is "The sky is blue" or "Pigs can fly", then (3) expresses a proposition. But if the next line of words is
  4. The previous line of words does not express a true proposition,
then (3) (or more precisely, the proposition expressed by (3)) can be neither true nor false. For if (3) is false, then (3) does not express a truth, which is just what (4) says; so (4) is true, and hence (3) is true. And if (3) is true, then (4) is true, and so (3) is false. Since a proposition is either true or false, if (3) is followed by (4), (3) does not express a proposition. Thus, whether (3) expresses a proposition depends on what the next line of words is.

Observe that the two lines of words can be written independently by two different people. Thus, whether a sequence of words uttered by me expresses a proposition can depend on what someone else says—even on what someone else says later, assuming (2).

We thus need to reject either (1) or (2) or both. In fact, I think we should reject (1). Rejecting (2) forces a non-classical logic. Call a sequence of words that does not express a proposition "nonsense". Then what we have learned is that whether a sequence of words is nonsense can depend on non-linguistic facts about the external world. Thus, just as we learned from Kripke that judging whether a proposition is possible is not in general a matter for an armchair investigator, so, too, judging whether a sequence of words is nonsense is not in general a matter for an armchair investigator.

Or at least that's what happens if one has a standard view of language. I myself have a non-standard one. On my view engaging in sentential anaphora (as in (3)) makes the anaphorically referred-to sentence be a part of one's own sentence—it is a way of taking up another's words and making them one's own. This is a version of deflationism. (By the way, I love the joke about deflationary semantics of "true". You want to be famous? You write a paper that says: "Everything Brandom says in his next paper is true." Then when Brandom publishes his paper, you say: "He's right, but I said it first.")

This all works a bit better on an eternalist theory of time.

Tuesday, December 16, 2008

Liar paradox with only quantification

The following remark is inspired by Williamson's "Everything" piece. Here is a liar paradox that uses no direct reference (as in "This sentence is false"), and indeed where the only funny business going on in it is a quantification over all sentences:

No actually tokened written sentence is true if it both ends with a decimal number which is the MD5 checksum of all of that sentence minus its last sequence of non-space symbols and if the MD5 checksum of all of that sentence minus its last sequence of non-space symbols is 187835884982830523138282294681725949791.
The paradox relies on the extremely likely claim (probability about 1 − 2^−128, I suppose) that nobody ever tokens a different written sentence satisfying the condition after the "if". Take my word for it that the sentence above does satisfy the condition.
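
For the curious, the condition is mechanically checkable. Here is a minimal sketch in Python; reading "the sentence minus its last sequence of non-space symbols" as everything up to and including the final space, and tolerating the sentence-final period on the number, are my assumptions about the intended conventions:

```python
import hashlib

def md5_decimal(text: str) -> int:
    """MD5 digest of the text, read as a decimal integer."""
    return int(hashlib.md5(text.encode("utf-8")).hexdigest(), 16)

def satisfies_condition(sentence: str) -> bool:
    """True if the sentence's last space-separated chunk (minus any
    closing period) is the decimal MD5 checksum of what precedes it."""
    body, sep, last = sentence.rpartition(" ")
    digits = last.rstrip(".")  # drop the sentence-final period
    if not (sep and digits.isdigit()):
        return False
    # Assumption: the hashed text keeps the space before the number.
    return md5_decimal(body + " ") == int(digits)
```

Note that constructing such a sentence requires no fixed-point trickery, since the trailing number is excluded from the hashed text: write everything up to the final "is", hash that, and append the digest in decimal.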

Note that "that sentence" is not directly referential—it is, rather, a bound variable, bound by the quantification over sentences.

What should we do? Well, I think we should either reject quantification over sentences, or reject something like compositionality. Neither is an appealing prospect, though I've got other reasons to be suspicious of compositionality and its relatives.

If one says that one should reject quantification over sentences, but allow quantification over sentence tokens, then I'll offer the following variant:

No actually written sentence token is true if it both ends with a decimal number which is the MD5 checksum of all of that sentence token minus its last sequence of non-space symbols and if the MD5 checksum of all of that sentence token minus its last sequence of non-space symbols is 127533944667835603647534200477710876898.

This yields interesting arguments. If one allows compositionality, then one should reject quantification over all sentences or all sentence tokens. I think this forces one to be an irrealist about sentences and sentence tokens. Or one can just disallow compositionality, and thus deny that the items in block quotes are bona fide sentences, expressive of propositions.