
Wednesday, March 19, 2025

Reducing promises to assertions

To promise something, I need to communicate something to you. What is that thing that I need to communicate to you? To a first approximation, what I need to communicate to you is that I am promising. But that’s circular: it says that promising is communicating that I am promising. This circularity is vicious, because it doesn’t distinguish promising from asking: asking is communicating that I am asking.

But now imagine I have a voice-controlled robot named Robby, and I have programmed him in such a way that I command him by asserting that Robby will do something because I have said he will do it. Thus, to get him to vacuum the living room, I assert “Robby will immediately vacuum the living room because I say so.” As long as what I say is within the range of Robby’s abilities, any statement I make in Robby’s vicinity about what he will do because I say he will do it is automatically true. This is all easily imaginable.
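
A minimal sketch of how such a robot might be programmed can make the mechanism vivid. This is only an illustration of the thought experiment: the class name, the ability list and the exact phrasing pattern are my assumptions, not part of the story.

    import re

    class Robby:
        """Toy model of the voice-controlled robot: an assertion that Robby
        will do something "because I say so" is parsed and, if the action is
        within Robby's abilities, carried out -- which is precisely what
        makes the assertion true."""

        ABILITIES = {"vacuum the living room", "fetch the mail"}

        def hear(self, utterance: str) -> bool:
            match = re.fullmatch(
                r"Robby will (?:immediately )?(.+?) because I say so\.?",
                utterance)
            if not match:
                return False  # not a statement about what Robby will do on my say-so
            action = match.group(1)
            if action not in self.ABILITIES:
                return False  # outside Robby's range, so the assertion may be false
            print(f"Robby: doing '{action}'")  # doing it is what verifies the assertion
            return True

    robby = Robby()
    robby.hear("Robby will immediately vacuum the living room because I say so.")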

Now, back to promises. Perhaps it works like this. I have a limited power to control the normative sphere. This normative power generates an effect in normative space precisely when I communicate that I am generating that effect. Thus, I can promise to buy you lunch by asserting “I will be obligated to you to buy you lunch.” And I permit you to perform heart surgery by asserting “You will cease to have a duty of respect for my autonomy not to perform heart surgery on me.” As long as what I say is within my normative capabilities, by communicating that I am making it true by communicating it, I make it be true, just as Robby will do what I assert he will do because of my say-so, as long as it is within his physical capabilities.

This solves the circularity problem for promising because what I am communicating is not that I am promising, but the normative effect of the promising:

  1. x promises to ϕ to y if and only if x successfully exercises a communicative normative power to gain an obligation-to-y to ϕ

  2. a communicative normative power for a normative effect F is a normative power whose object is F and whose successful exercise requires the circumstance that one express that one is producing F by communicating that one is doing so.

There are probably some further tweaks to be made.

Of course, in practice, we communicate the normative effect not by describing it explicitly, but by using set phrases, contextual cues, etc.

This technique allows us to reduce promising, consenting, requesting, commanding and other illocutionary forces to normative power and communicating, which is basically a generalized version of assertion. But we cannot account for communicating or asserting in this way—if we try to do that, we do get vicious circularity.

Thursday, December 8, 2022

Utilitarianism and communication

Alice and Bob are both perfect Bayesian epistemic agents and subjectively perfect utilitarians (i.e., they always do what by their lights maximizes expected utility). Bob is going to Megara. He comes to a crossroads, from which two different paths lead to Megara. On exactly one of these paths there is a man-eating lion and on the other there is nothing special. Alice knows which path has the lion. The above is all shared knowledge for Alice and Bob.

Suppose the lion is on the left path. What should Alice do? Well, if she can, she should bring it about that Bob takes the right path, because doing so would clearly maximize utility. How can she do that? An obvious suggestion: Engage in a conventional behavior indicating where the lion is, such as pointing left and roaring, or saying “Hail well-met traveler, lest you be eaten, I advise you to avoid the leftward leonine path.”

But I’ve been trying really hard to figure out how it is that such a conventional behavior would indicate to Bob that the lion is on the left path.

If Alice were a typical human being, she would have a habit of using established social conventions to tell the truth about things, except perhaps in exceptional cases (such as the murderer at the door), and so her use of the conventional lion-indicating behavior would correlate with the presence of lions, and would provide Bob with evidence of the presence of lions. But Alice is not a typical human being. She is a subjectively perfect utilitarian. Social convention has no normative force for Alice (or Bob, for that matter). Only utility does.

Similarly, if Bob were a typical human being, he would have a habit of forming his beliefs on the basis of testimony interpreted via established social conventions absent reason to think one is being misinformed, and so Alice’s engaging in conventional left-path lion-indicating behavior would lead Bob to think there is a lion on the left, and hence to go on the right. And while it would still be true that social convention has no normative force for Alice, Alice would have reason to think that Bob follows convention, and for the sake of maximizing utility would suit her behavior to his. But Bob is a perfect Bayesian. He doesn’t form beliefs out of habit. He updates on evidence. And given that Alice is not a typical human being, but a subjectively perfect utilitarian, it is unclear to me why her engaging in the conventional left-path lion-indicating behavior is more evidence for the lion being on the left than for the lion being on the right. For Bob knows that convention carries no normative force for Alice.
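
The point can be made numerically with Bayes' theorem. Here is a minimal sketch of Bob's update; the prior and likelihood values are purely illustrative assumptions of mine.

    def posterior_lion_left(prior, p_signal_if_left, p_signal_if_right):
        """Bob's posterior that the lion is on the left, given that Alice
        produces the conventional 'lion on the left' signal."""
        numerator = p_signal_if_left * prior
        return numerator / (numerator + p_signal_if_right * (1 - prior))

    # A convention-bound Alice's signal tracks the lion, so Bob learns a lot:
    print(posterior_lion_left(0.5, 0.95, 0.05))  # ~0.95

    # But Bob has no reason to expect an unbound utilitarian Alice's use of
    # the conventional signal to correlate with the lion's location one way
    # rather than the other; the likelihoods are then symmetric and the
    # posterior is just the prior:
    print(posterior_lion_left(0.5, 0.5, 0.5))    # 0.5 -- the signal is uninformative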

Here is a brief way to put it. For Alice and Bob, convention carries no weight except as a predictor of the behavior of convention-bound people, i.e., people who are not subjectively perfect utilitarians. It is shared knowledge between Alice and Bob that neither is convention-bound. So convention is irrelevant to the problem at hand, the problem of getting Bob to avoid the lion. But there is no solution to the problem absent convention or some other tool unavailable to the utilitarian (a natural law theorist might claim that mimicry and pointing are natural indicators).

If the above argument is correct—and I am far from confident of that, since it makes my head spin—then we have an argument that in order for communication to be possible, at least one of the agents must be convention-bound. One way to be convention-bound is to think, in a way utilitarians don’t, that convention provides non-consequentialist reasons. Another way is to be an akratic utilitarian, addicted to following convention. Now, the possibility of communication is essential for the utility of the kinds of social animals that we are. Thus we have an argument that at least some subjective utilitarians will have to become convention-bound, either by getting themselves to believe that convention has normative force or by being akratic.

This is not a refutation of utilitarianism. Utilitarians, following Parfit, are willing to admit that there could be utility maximization reasons to cease to be utilitarian. But it is, nonetheless, really interesting if something as fundamental as communication provides such a reason.

I put this as an issue about communication. But maybe it’s really an issue not about communication but about coordination. The literature on repeated games might help in some way.

Monday, August 15, 2011

Implicature and lying

Philosophers say things like: "Asserting 'There is no conclusive proof that Smith is a plagiarist' implicates that there is a genuine possibility of Smith's being a plagiarist." (And yet taken literally "There is no conclusive proof that Smith is a plagiarist" is true even if no one ever suspected Smith of plagiarism.) However, what one implicates has not only propositional content but also illocutionary force, and both the content and the force are implicated. So if we want to be more explicit, we should say something like: "Asserting 'There is no conclusive proof that Smith is a plagiarist' implicates the suggestion (or insinuation or even assertion) that there is a genuine possibility of Smith's being a plagiarist." Which of the forces--suggestion, insinuation or assertion--is the right one to choose is going to be hard to determine. Maybe there is vagueness (ugh!) here. In any case, we don't just implicate propositions--we implicate whole speech acts. A question can implicate an assertion and an assertion a question ("It would be really nice if you would tell me whether...").

I used to wonder whether the moral rules governing lying (which I think are very simple, namely that it is always wrong to lie, but I won't be assuming that) extend to false implicature. I now realize that the question is somewhat ill-formed. The moral rules governing lying are tied specifically to assertions, not to requests or commands. One can implicate an assertion, but one can also implicate other kinds of speech acts, and it only makes sense to ask whether the moral rules governing lying extend to false implicature when what is implicated is an assertion or assertion-like.

And I now think there is a very simple answer to the question. The moral rules governing lying do directly extend to implicated assertions. But just as these moral rules do not directly extend to other assertion-like explicit speech acts, such as musing, so too they do not directly extend to other assertion-like implicated speech acts, such as suggesting. The rules governing an implicated suggestion are different from the rules governing an explicit assertion not because the implicated suggestion is implicated, but because the implicated suggestion is a suggestion. If it were an explicit suggestion, it would be governed by the same rules.

That said, there are certain speech acts which are more commonly implicated than made explicitly--suggestion is an example--and there may even be speech acts, like insinuation (Jon Kvanvig has impressed on me the problematic nature of "I insinuate that...") that don't get to be performed explicitly (though I don't know that they can't be; even "I insinuate that..." might turn out to be a very subtle kind of insinuation in some weird context).

I think the distinction between the explicit speech act and the implicated speech act does not mark a real joint in nature. The real joint in nature is not between, say, explicit and implicated assertion, but between, say, assertion and suggestion (regardless of which, if any, is explicit or implicated). Fundamental philosophy of communication does not, I think, need the distinction between the explicit speech act and the implicated speech act. That distinction is for the linguists--it concerns the mechanics of communication (just as the distinction between written and spoken English, or between French and German, does) rather than its fundamental nature.

Monday, April 11, 2011

Language and telepathic communication

Imagine aliens who could telepathically induce three kinds of mental states in others, at a medium distance range: (a) states of the form it seems that x telepathically informs me that s; (b) states of the form it seems that x telepathically commits itself to me that s; and (c) states of the form it seems that x telepathically requests of me that s.  (Or maybe one can replace the seeming states with being states.)

These three kinds of induced mental states would correspond to our assertions, promises and requests, respectively.  (I didn't include questions because questions are requests for information.)

Do these aliens have language?  I think a standard answer would be negative.  Normally, language acts as an intermediary between the speaker and states of the form (a), (b) and (c) in the listener.  In the telepathic aliens, either there is no intermediary or the intermediary is merely causal--say, waves in a psi field.  So even if there is an intermediary, it lacks the conventionality, normativity and grammaticality that seem to be marks of language.  Moreover, in language as we know it, it is the listener who processes the incoming utterance and turns it into a mental state like (a), (b) or (c).

For now suppose the version of the alien story that involves waves in a psi field as a causal intermediary.  I think the differences are not sufficiently significant to mark the aliens as lacking language.  Conventionality is a feature of human languages, which are developed by memetic evolution and are passed on memetically.  But there is nothing absurd about languages that develop instead by genetic evolution.  (It may be that some animals have them.)

There is no reason why the aliens' communicative methods could not be normatively laden through and through.  They might, for instance, be subject to an Aristotelian teleology according to which one ought not telepathically induce states of the form it seems that x telepathically informs me that s unless s, or unless one believes s, or unless one knows that s, or whatever the correct norm of assertion is.

If the aliens are in a world relevantly like ours, there will be regularities about the correlations between the waves in the psi field and the resultant mental states, and these regularities may very well be isomorphic to grammatical ones.

Finally, I do not think the question whether the listeners process the inputs or not matters much.  Most of the time, our linguistic processing is automatic and unconscious.  We could imagine a person whose automatic and unconscious linguistic processing is replaced by a prosthesis that produces the relevant mental states.  We could talk to such a person and she would count as hearing us.  Furthermore, we can imagine that in the aliens there is some minor processing, such as amplifying faint psi field waves.  The presence or absence of amplification surely doesn't mark the difference between language and non-language.

But now suppose that we grant that the aliens have language.  That means that an account of what language is must apply both to the aliens and to us, and it makes the task of such an account easier, because the aliens, being merely stipulated, can easily be studied. :-)

I see, for instance, four rough but attractive accounts of assertion that work for the telepathic aliens:

  1. x asserts that s to y if and only if x intentionally brings it about that it seems to y that x informs it* that s
  2. x asserts that s to y if and only if x tries to bring it about that it seems to y that x informs it* that s
  3. x asserts that s to y if and only if x intentionally brings it about that it seems to y that x informs it* that s by means of a sufficiently properly functioning process whose telos is the bringing about of this mental state
  4. x asserts that s to y if and only if x tries to bring it about that it seems to y that x informs it* that s by means of a sufficiently properly functioning process whose telos is the bringing about of this mental state.
(In 3 and 4, one needs to be clearer on what exactly is in the scope of the intention.)

And something like one of these might work for us, too.

Wednesday, January 26, 2011

Epistemic self-sacrifice and prisoner's dilemma

In the fall, I attended a really neat talk by Patrick Grim reporting on several of his computer simulation experiments. Suppose you have a bunch of investigators who are each trying to find the maximum ("the solution to the problem") of some function. They search, but they also talk to one another. When someone they are in communication with finds a better option than their own, they have a certain probability of switching to that. The question is: How much communication should there be between investigators if we want the community as a whole to do well vis-a-vis the maximization problem?

Consider two models. On the Local Model (my terminology), the investigators are arranged around the circumference of a circle, and each talks only to her immediate neighbors. On the Internet Model (also my tendentious terminology), every investigator is in communication with every other investigator. So, here's what you get. On both models, the investigators eventually communally converge on a solution. On the Internet Model, community opinion converges much faster than on the Local Model. But on the Internet Model the solution converged on is much more likely to be wrong (to be a local maximum rather than the global maximum).
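
For concreteness, here is a minimal sketch of this kind of simulation. It is my reconstruction rather than Grim's code: the fitness landscape, the parameter values and the update rule are all assumptions, but the contrast between a ring network and a complete network is the one just described.

    import math
    import random

    def fitness(x):
        """A rugged landscape with several local maxima besides the global
        one -- a made-up stand-in for the investigators' problem."""
        return math.sin(5 * x) + 0.3 * math.sin(17 * x)

    def simulate(internet=False, n=20, steps=300, switch_prob=0.5, seed=0):
        rng = random.Random(seed)
        pos = [rng.uniform(0.0, 3.0) for _ in range(n)]
        for _ in range(steps):
            # Each investigator does a bit of independent local search.
            for i in range(n):
                trial = min(max(pos[i] + rng.gauss(0, 0.05), 0.0), 3.0)
                if fitness(trial) > fitness(pos[i]):
                    pos[i] = trial
            # Then she may switch to a better solution she hears about.
            new_pos = pos[:]
            for i in range(n):
                if internet:  # Internet Model: in communication with everyone
                    contacts = [j for j in range(n) if j != i]
                else:         # Local Model: ring, immediate neighbors only
                    contacts = [(i - 1) % n, (i + 1) % n]
                best = max(contacts, key=lambda j: fitness(pos[j]))
                if fitness(pos[best]) > fitness(pos[i]) and rng.random() < switch_prob:
                    new_pos[i] = pos[best]
            pos = new_pos
        return max(fitness(x) for x in pos)

    print("Local Model best found:   ", simulate(internet=False))
    print("Internet Model best found:", simulate(internet=True))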

So, here is a conclusion one might draw (which may not be the same as Grim's conclusion): If the task is satisficing or time is of the essence, the Internet Model may be better—we may need to get a decent working answer quickly for practical purposes, even if it's not the true one. But if the task is getting the true solution, it seems the Local Model is a better model for the community to adopt.

Suppose we're dealing with a problem where we really want the true solution, not solutions that are "good enough". This is more likely in more theoretical intellectual enterprises. Then the Local Model is epistemically better for the community. But what is epistemically better for the individual investigator?

Suppose that we have a certain hybrid of the Internet and Local Models. As in the Local Model, the investigators are arranged on a circle. Each investigator knows what every other investigator is up to. But the investigator has a bias in favor of her two neighbors over other investigators. Thus, she is more likely to switch her opinion to match that of her neighbors than to match that of the distant ones. There are two limiting cases: in one limiting case, the bias goes to zero, and we have the Internet Model. In the other limiting case, although she knows the opinions of investigators who aren't her neighbors, she ignores them and will never switch to them. This is the Parochial Model. The Parochial Model gives exactly the same predictions as the Local Model.
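
In code, the hybrid amounts to a single bias parameter interpolating between the two limiting cases. Again just a sketch: the particular weighting scheme is my assumption, and one could equally implement the bias as a higher switching probability for neighbors.

    import random

    def pick_informant(i, n, bias, rng):
        """Whom does investigator i listen to this round?  With probability
        `bias` she consults a ring neighbor, otherwise anyone else.
        bias = 0.0 gives the Internet Model; bias = 1.0 gives the Parochial
        Model, which behaves exactly like the Local Model."""
        if rng.random() < bias:
            return rng.choice([(i - 1) % n, (i + 1) % n])
        return rng.choice([j for j in range(n) if j != i])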

Thus, investigators' having an epistemic bias in favor of their neighbors can be good for the community. But such a bias can be bad for the individual investigator. Jane would be better off epistemically if she adopted the best solution currently available in the community. But if everybody always did that, then the community would be worse off epistemically with respect to eventually getting at the truth, since then we would have the Internet Model.

This suggests that we might well have the structure of a Prisoner's Dilemma. Everybody is better off epistemically if everybody has biases in favor of the local (and it need not be spatially local), but any individual would be better off defecting in favor of the best solution currently available. This suggests that epistemic self-sacrifice is called for by communal investigation: people ought not all adopt the best available solution—we need eccentrics investigating odd corners of the solution space, because the true solution may be there.

Of course, one could solve the problem like this. One keeps track of two solutions. One solution is the one that one comes to using the biased method and the other is the best one the community has so far. The one that one comes to using the biased method is the one that one's publications are based on. The best one the community has so far is the one that one's own personal opinion is tied to. The problem with this is that this kind of "double think" may be psychologically unworkable. It may be that investigation only works well when one is committed to one's solution.

If this double think doesn't work, this suggests that in some cases individual and group rationality could come apart. It is individually irrational to be intellectually eccentric, but good for the community that there be intellectual eccentrics.

My own pull is different in this case than in the classic non-epistemic Prisoner's Dilemma. In this case, I think one should individually go for individual rationality. One should not sacrifice oneself epistemically here by adopting biases. But in the classic Prisoner's Dilemma, one has very good reason to sacrifice oneself.

Wednesday, August 25, 2010

Small talk

I've always found small talk challenging. Talking about anything other than technical or academic subjects is hard. I think I've just figured out small talk, though. The primary aim of small talk is not the communication of propositions. Rather, small talk for humans is like grooming for less verbal primates.