Tuesday, December 2, 2025

Omniscience and vagueness

Suppose there is metaphysical vagueness, say that it’s metaphysically vague whether Bob is bald. God cannot believe that Bob is bald, since then Bob is bald. God cannot believe that Bob is not bald, since then Bob is not bald. Does God simply suspend judgment?

Here is a neat solution for the classical theist. Classical theists believe in divine simplicity. Divine simplicity requires an extrinsic constitution model of divine belief or knowledge in the case of contingent things. Suppose a belief version. Then, plausibly, God’s beliefs about contingent things are partly constituted by the realities they are about. Hence, it is plausible that when a reality is vague, it is vague whether God believes in this reality.

Here is another solution. If we think of belief as taking-as-true and disbelief as taking-as-false, we should suppose a third state of taking-as-vague. Then we say that for every proposition, God has a belief, disbelief or third state, as the case might be.

Tuesday, November 4, 2025

Towards quantifying the good of success

Yesterday, I argued that the good of success contributes to one’s well-being at the time of one’s striving for success rather than at the time of the success itself.

It seems, then, that the longer you are striving, the longer the stretch of time over which you have the good of success. Is that right?

We do think that way. You work on a book for five years. Success is sweeter than if you work on a book for one year.

But only other things being equal. It’s not really the length of time by itself. It’s something like your total personal investment in the project, to which time is only one contribution. Gently churning butter for an hour while multitasking other things (using a pedal-powered churn, for instance) does not get you more good of success than churning butter with maximum effort for fifteen minutes, if the outputs are the same.

We might imagine—I am not sure this is right—that the good of success is variably spread out over the time of striving in proportion to the degree of striving at any given time.

What else goes into the value of success besides total personal investment? Another ingredient is the actual value of the product. If you’ve decided to count the hairs on your toes, success is worth very little. Furthermore, the actual value of the product needs to be reduced in proportion to the degree to which you contributed.

Thus, if Alice and Bob both churned butter and produced n pounds, the value of the output is something like bn, where b is the value of butter per pound. If the investments put in by Alice and Bob are IA and IB, then Alice’s share of the value is bnIA/(IA+IB). But since the value of success is proportional also to the absolute investment, I think that the considerations given thus far make the value of success for Alice proportional to:

  • bnIA²/(IA+IB).
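
To make the proportionality concrete, here is a minimal sketch; the function, the butter price, and the investment numbers are all illustrative inventions of mine, not anything from the post:

```python
# Sketch of the proposed proportionality: value of success for a contributor is
# b * n * I_contributor^2 / (sum of all investments). Names and numbers are illustrative.
def success_value(b, n, investments, who):
    total_investment = sum(investments.values())
    return b * n * investments[who] ** 2 / total_investment

# Alice and Bob jointly churn n = 10 pounds of butter worth b = 4 per pound.
investments = {"Alice": 3.0, "Bob": 1.0}
for person in investments:
    print(person, success_value(b=4, n=10, investments=investments, who=person))
# Alice 90.0, Bob 10.0: sensitive both to Alice's share and to her absolute investment.
```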

Next note that one way to think about the degree to which you contributed is to think as above—what fraction of the total investment is yours. However, even if you are the only person working on the project, the degree of your contribution may be low. Let’s say that you have moved into a house with a mint bush. Mint bushes are aggressive. They grow well with little care (or so we’ve found). But you do water it. The mint bush added half a pound to its weight at the end of the season. You don’t, however, get credit for all of that half-pound, since even if you hadn’t watered it, it would likely have grown, just not as much. So you only get credit for the portion of the output that is “yours”. Moreover, sometimes things work probabilistically. If the success is mostly a matter of chance given your investment, I think you only get good-of-success credit in proportion to the chance of success—but I am not completely sure of this.

But here is something that makes me a little uncertain of the above reasoning. Suppose that you have some process where the output is linearly dependent on the investment of effort. You invest I, and you get something of value cI for some proportionality constant c. By the above account, to get the value of success, you should multiply this by I again, since the value of success is proportional to both the value of the output and the effort put in. Thus, you get cI². But is it really the case that when you double the effort you quadruple the value of the success? Maybe. That would be interesting! Or are we double-counting I?

Another question. When we talk about the value of the output, is that the objective value, or the value you put on it, or some combination of the two? Counting the hairs on your toes has little objective value, but what if you think it has significant value? Doesn’t success then have significant value? I suspect not.

But what about activities where the value comes only from your pursuit, such as when you try to win at solitaire or run a mile as fast as you can? In those cases it’s harder to separate the value of the output from the value you put on it. My guess is that in those cases there is still an objective value of the output, but this objective value is imposed by your exercise of normative power—by pursuing certain kinds of goals we can make the goal have value.

Let’s come back to counting hairs on toes. If you’re doing it solely for the sake of the value of knowledge, this has (in typical circumstances) little objective value. But if your hobby is counting difficult-to-count things, then maybe there is additional value, beyond that of trivial knowledge, in the result.

I suspect there are further complications. Human normativity is messy.

And don’t ask me how this applies to God. On the one hand, it takes no effort for God to produce any effect. On the other hand, by divine simplicity God is perfectly invested in everything he does. But since my metaethics is kind-relative, I am happy with the idea that this will go very differently for God than for us.

Monday, November 3, 2025

Aquinas on God's knowledge of propositions

Does God know that the sky is blue?

That seems like a silly question. It’s not like we’re asking whether God knows future contingents, or counterfactuals of freedom. That the sky is blue is something that it is utterly unproblematic for God to know.

Except that it is tempting to say that God has no propositional knowledge, and knowing that the sky is blue is knowing a proposition.

It seems that Aquinas answers the question in Summa Theologiae I.14.14: “God knows all the propositions that can be formulated” (that’s in Freddoso’s translation; the older Dominican translation talks of “enunciable things”, but I think that doesn’t affect what I am going to say). It seems that God does have propositional knowledge, albeit not in the divided or successive way that we do.

But what he is up to in I.14.14 is not what it initially sounds like to the analytic philosopher’s ear.

For consider Thomas’s argument in I.14.14 that God knows all formulable propositions:

Since (a) to formulate propositions lies within the power of our intellect, and since (b), as was explained above (a. 9), God knows whatever lies within either His own power or the power of a creature, it must be the case that God knows all the propositions that can be formulated.

But now notice an ambiguity in “God knows the proposition that the sky is blue.” In one sense, which I will call “alethic”, this just means God knows that the sky is blue. In another sense, the “objectual”, it means that God knows a certain abstract object, the proposition that the sky is blue. In the objectual sense, God also knows the proposition that the sky is green—God fully knows that proposition, just as he knows other objects, like the person Socrates. But God does not, of course, have the alethic knowledge here—God does not know that the sky is green, because the sky is not green.

If it was the alethic sense that Thomas was after, his argument would be invalid. For in article 9, the discussion clearly concerns objectual knowledge. Exactly the same argument establishes that God knows the proposition that the sky is green as that he knows the proposition that the sky is blue. Furthermore, the Biblical quote Thomas gives in support of his view is “The Lord knows the thoughts of men” (Psalm 93:11). But the Lord doesn’t know all of them to be true, doesn’t know all of them alethically, because not all of the thoughts of humans are true.

Furthermore, if it was alethic knowledge that Aquinas was after, it would be inaccurate to say God knows all propositions. For only “half” of the propositions can be known alethically—the true ones!

All that said, I think we can still bootstrap from the objectual to the alethic knowledge. God’s knowledge of objects is perfect (Aquinas relies on this perfection multiple times in Question 14) and hence complete. If God knows something, God also knows all of its properties, intrinsic and relational. Thus, if God knows a proposition objectually, and that proposition has a truth value, God knows that truth value. In particular, if that proposition is true, God knows that it is true. And that seems to suffice for counting as knowing the proposition alethically.

So, it looks like Aquinas is committed to God objectually knowing both the propositions that the sky is green and that the sky is blue, and also knowing that the former is false and the latter is true—which seems to be enough for God to count as knowing that the sky is blue. (Though I could see this last point getting questioned.)

Wednesday, July 9, 2025

Acting without knowledge of rightness

Some philosophers think that for your right action to be morally worthy you have to know that the action is right.

On the contrary, there are cases where an action is even more morally worthy when you don’t know it’s right.

  1. Alice is tasked with a dangerous mission to rescue hikers stranded on a mountain. She knows it’s right, and she fulfills the mission.

  2. Bob is tasked with a dangerous mission to rescue hikers stranded on a mountain. He knows it’s right, but then just before he heads out, a clever philosopher gives him a powerful argument that there is no right or wrong. He is not fully convinced, but he has no time to figure out whether the argument works before the mission starts. Instead, he reasons quickly: “Well, there is a 50% chance that the argument is sound and there is no such thing as right and wrong, in which case at least I’m not doing anything wrong by rescuing. But there is a 50% chance that there is such a thing as right and wrong, and if anything is right, it’s rescuing these hikers.” And he fulfills the mission.

Bob’s action is, I think, even more worthy and praiseworthy than Alice’s. For while Alice risks her life for a certainty of doing the right thing, Bob is willing to risk his life in the face of uncertainty. Some people would take the uncertainty as an excuse, but Bob does not.

Tuesday, February 25, 2025

Being known

The obvious analysis of “p is known” is:

  1. There is someone who knows p.

But this obvious analysis doesn’t seem correct, or at least there is an interesting use of “is known” that doesn’t fit (1). Imagine a mathematics paper that says: “The necessary and sufficient conditions for q are known (Smith, 1967).” But what if the conditions are long and complicated, so that no one can keep them all in mind? What if no one who read Smith’s 1967 paper remembers all the conditions? Then no one knows the conditions, even though it is still true that the conditions “are known”.

Thus, (1) is not necessary for a proposition to be known. Nor is this a rare case. I expect that more than half of the mathematics articles from half a century ago contain some theorem or at least lemma that is known but which no one knows any more.

I suspect that (1) is not sufficient either. Suppose Alice is dying of thirst on a desert island. Someone, namely Alice, knows that she is dying of thirst, but it doesn’t seem right to say that it is known that she is dying of thirst.

So if it is neither necessary nor sufficient for p to be known that someone knows p, what does it mean to say that p is known? Roughly, I think, it has something to do with accessibility. Very roughly:

  2. Somebody has known p, and the knowledge is accessible to anyone who has appropriate skill and time.

It’s really hard to specify the appropriateness condition, however.

Does all this matter?

I suspect so. There is a value to something being known. When we talk of scientists advancing “human knowledge”, it is something like this “being known” that we are talking about.

Imagine that a scientist discovers p. She presents p at a conference where 20 experts learn p from her. Then she publishes it in a journal, where 100 more people learn it. Then a Youtuber picks it up and now a million people know it.

If we understand the value of knowledge as something like the sum of epistemic utilities across humankind, then the successive increments in value go like this: first, we have a move from zero to some positive value V when the scientist discovers p. Then at the conference, the value jumps from V to 21V. Then after publication it goes from 21V to 121V. Then given Youtube, it goes from 121V to roughly 1,000,000V. The jump at initial discovery is by far the smallest, and the biggest leap is when the discovery is publicized. This strikes me as wrong. The big leap in value is when p becomes known, which either happens when the scientist discovers it or when it is presented at the conference. The rest is valuable, but not so big in terms of the value of “human knowledge”.
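
The arithmetic, as a quick sketch (the stage sizes are the story’s, with the Youtube stage taken to bring the total to about a million knowers):

```python
# Cumulative value of "human knowledge" of p on the sum-of-epistemic-utilities model,
# where each person who knows p contributes V. Stage sizes follow the story above.
V = 1.0
knowers = [1, 1 + 20, 1 + 20 + 100, 1_000_000]   # discovery, conference, journal, Youtube
values = [k * V for k in knowers]
increments = [values[0]] + [later - earlier for earlier, later in zip(values, values[1:])]
print(values)      # [1.0, 21.0, 121.0, 1000000.0]
print(increments)  # [1.0, 20.0, 100.0, 999879.0] -- the initial discovery is the smallest jump
```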

Tuesday, January 21, 2025

Competent language use without knowledge

I can competently use a word without knowing what the word means. Just imagine some Gettier case, such as that my English teacher tried to teach me a falsehood about what “lynx” means, but due to themselves misremembering what the word means, they taught me the correct meaning. Justified true belief is clearly enough for competent use.

But if I then use “lynx”, even though I don’t know what the word means, I do know what I mean by it. Could one manufacture a case where I competently use a word but don’t even know what I mean by it?

Maybe. Suppose I am a student and a philosophy professor convinces me that I am so confused that I don’t know what I mean when I use the word “supervenience” in a paper. I stop using the word. But then someone comments on an old online post of mine from the same period as the paper, in which post I used “supervenience”. The commenter praises how insightfully I have grasped the essence of the concept. This someone uses a false name, that of an eminent philosopher. I come to believe on the supposed authority of this person that I meant by “supervenience” what I in fact did mean by it, and I resume using it. But the authority is false. It seems that now I am using the word without knowing what I mean by it. And I could be entirely competent.

Monday, January 20, 2025

Open-mindedness and epistemic thresholds

Fix a proposition p, and let T(r) and F(r) be the utilities of assigning credence r to p when p is true and false, respectively. The utilities here might be epistemic or of some other sort, like prudential, overall human, etc. We can call the pair T and F the score for p.

Say that the score T and F is open-minded provided that expected utility calculations based on T and F can never require you to ignore evidence, assuming that evidence is updated on in a Bayesian way. Assuming the technical condition that there is another logically independent event (else it doesn’t make sense to talk about updating on evidence), this turns out to be equivalent to saying that the function G(r) = rT(r) + (1−r)F(r) is convex. The function G(r) represents your expected value for your utility when your credence is r.

If G is a convex function, then it is continuous on the open interval (0,1). This implies that if one of the functions T or F has a discontinuity somewhere in (0,1), then the other function has a discontinuity at the same location. In particular, the points I made in yesterday’s post about the value of knowledge and anti-knowledge carry through for open-minded and not just proper scoring rules, assuming our technical condition.

Moreover, we can quantify this discontinuity. Given open-mindedness and our technical condition, if T has a jump of size δ at credence r (e.g., in the sense that the one-sided limits exist and differ by δ), then F has a jump of size rδ/(1−r) at the same point. In particular, if r > 1/2, then if T has a jump of a given size at r, F has a larger jump at r.

I think this gives one some reason to deny that there are epistemically important thresholds strictly between 1/2 and 1, such as the threshold between non-belief and belief, or between non-knowledge and knowledge, even if the location of the thresholds depends on the proposition in question. For if there are such thresholds, then imagine cases of propositions p with the property that it is very important to reach a threshold if p is true while one’s credence matters very little if p is false. In such a case, T will have a larger jump at the threshold than F, and so we will have a violation of open-mindedness.

Here are three examples of such propositions:

  • There are objective norms

  • God exists

  • I am not a Boltzmann brain.

There are two directions to move from here. The first is to conclude that because open-mindedness is so plausible, we should deny that there are epistemically important thresholds. The second is to say that in the case of such special propositions, open-mindedness is not a requirement.

I wondered initially whether a similar argument doesn’t apply in the absence of discontinuities. Could one have T and F be open-minded even though T continuously increases a lot faster than F decreases? The answer is positive. For instance, the pair T(r) = e^(10r) and F(r) = −r is open-minded (though not proper), even though T increases a lot faster than F decreases. (Of course, there are other things to be said against this pair. If that pair is your utility, and you find yourself with credence 1/2, you will increase your expected utility by switching your credence to 1 without any evidence.)
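
A quick symbolic check of that example, under my reading of the convexity criterion above (a sketch; nothing in it comes from the original post):

```python
# Check that G(r) = r*T(r) + (1-r)*F(r) is convex on [0, 1] for T(r) = e^(10r), F(r) = -r,
# which by the criterion above is what open-mindedness comes to (given the technical condition).
import sympy as sp

r = sp.symbols('r')
T = sp.exp(10 * r)
F = -r
G = r * T + (1 - r) * F
G2 = sp.simplify(sp.diff(G, r, 2))
print(G2)  # something like (100*r + 20)*exp(10*r) + 2: positive on [0, 1], so G is convex there
```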

Friday, January 17, 2025

Knowledge and anti-knowledge

Suppose knowledge has a non-infinitesimal value. Now imagine that you continuously gain evidence for some true proposition p, until your evidence is sufficient for knowledge. If you’re rational, your credence will rise continuously with the evidence. But if knowledge has a non-infinitesimal value, your epistemic utility with respect to p will have a discontinuous jump precisely when you attain knowledge. Further, I will assume that the transition to knowledge happens at a credence strictly bigger than 1/2 (that’s obvious) and strictly less than 1 (Descartes will dispute this).

But this leads to an interesting and slightly implausible consequence. Let T(r) be the epistemic utility of assigning evidence-based credence r to p when p is true, and let F(r) be the epistemic utility of assigning evidence-based credence r to p when p is false. Plausibly, T is a strictly increasing function (being more confident in a truth is good) and F is a strictly decreasing function (being more confident in a falsehood is bad). Furthermore, the pair T and F plausibly yields a proper scoring rule: whatever one’s credence, one doesn’t have an expectation that some other credence would be epistemically better.

It is not difficult to see that these constraints imply that if T has a discontinuity at some point 1/2 < rK < 1, so does F. The discontinuity in F implies that as we become more and more confident in the falsehood p, suddenly we have a discontinuous downward jump in utility. That jump occurs precisely at rK, namely when we gain what we might call “anti-knowledge”: when one’s evidence for a falsehood becomes so strong that it would constitute knowledge if the proposition were true.

Now, there potentially are some points where we might plausibly think that the epistemic utility of having a credence in a falsehood takes a discontinuous downward jump. These points are:

  • 1, where we become certain of the falsehood

  • rB, the threshold of belief, where the credence becomes so high that we count as believing the falsehood

  • 1/2, where we start to become more confident in the falsehood p than the truth not-p

  • 1 − rB, where we stop believing not-p, and

  • 0, where the falsehood p becomes an epistemic possibility.

But presumably rK is strictly between rB and 1, and hence rK is not one of these points. Is it plausible to think that there is a discontinuous downward jump in epistemic utility when we achieve anti-knowledge by crossing the threshold rK in a falsehood?

I am inclined to say not. But that forces me to say that there is no discontinuous upward jump in epistemic utility once we gain knowledge.

On the other hand, one might think that the worst kind of ignorance is when you’re wrong but you think you have knowledge, and that’s kind of like the anti-knowledge point.

Wednesday, July 24, 2024

Knowing what it's like to see green

You know what it’s like to see green. Close your eyes. Do you still know what it’s like to see green?

I think so.

Maybe you got lucky and saw some green patches while closing your eyes. But I am not assuming that happened. Even if you saw no green patches, you knew what it is like to see green.

Philosophers who are really taken with qualia sometimes say that:

  1. Our knowledge of what it is like to see green could only be conferred on us by having an experience of green.

But if I have the knowledge of what it is like to see green when I am not experiencing green, then that can’t be right. For whatever state I am in when not experiencing green but knowing what it’s like to see green is a state that God could gift me with without ever giving me an experience of green. (One might worry that then it wouldn’t be knowledge, but something like true belief. But God could testify to the accuracy of my state, and that would make it knowledge.)

Perhaps, however, we can say this. When your eyes are closed and you see no green patches, you know what it’s like to see green in virtue of having the ability to visualize green, an ability that generates experiences of green. If so, we might weaken (1) to:

  2. Our knowledge of what it is like to see green could only be conferred on us by having an experience of green or an ability to generate such an experience at will by visual imagination.

We still have a conceptual connection between knowledge of the qualia and experience of the qualia then.

But I think (2) is still questionable. First, it seems to equivocate on “knowledge”. Knowledge grounded in abilities seems to be knowledge-how, and that’s not what the advocates of qualia are talking about.

Second, suppose you’ve grown up never seeing green. And then God gives you an ability to generate an experience of green at will by visual imagination: if you “squint your imagination” thus-and-so, you will see a green patch. But you’ve never so squinted yet. It seems odd to say you know what it’s like to see green.

Third, our powers of visual imagination vary significantly. Surely I know what it’s like to see paradigm instances of green, say the green of a lawn in an area where water is plentiful. If I try to imagine a green patch, then if I get lucky my mind’s eye presents to me a patch of something dim, muddy and greenish, or maybe a lime green flash. I can’t imagine a paradigm instance of green. And yet surely, I know what it’s like to see paradigm instances of green. It seems implausible to think that when my eyes are closed my knowledge of what it’s like to see green (and even paradigm green) is grounded in my ability to visualize these dim non-paradigm instances.

It seems to me that what the qualia fanatic should say is that:

  3. We only know what it’s like to see green when we are experiencing green.

But I think that weakens arguments from qualia against materialism because (3) is more than a little counterintuitive.

Wednesday, March 27, 2024

Knowledge of qualia

Suppose epiphenomenalism is true about qualia, so qualia are nonphysical properties that have no causal impact on anything. Let w0 be the actual world and let w1 be a world which is exactly like the actual world, except that (a) there are no qualia (so it’s a zombie world) and (b) instead of qualia, there are causally inefficacious nonphysical properties that have a logical structure isomorphic to the qualia of our world, and that occur in the corresponding places in the spatiotemporal and causal nexuses. Call these properties “epis”.

The following seems pretty obvious to me:

  1. In w1, nobody knows about the epis.

But the relationship of our beliefs about qualia to the qualia themselves seems to be exactly like the relationship of the corresponding beliefs of the denizens of w1 to the epis. In particular, neither are any of their beliefs caused by the obtaining of epis, nor are any of our beliefs caused by the obtaining of qualia, since both are epiphenomenal. So, plausibly:

  2. If in w1, nobody knows about the epis, then in w0, nobody knows about the qualia.

Conclusion:

  3. Nobody knows about the qualia.

But of course we do! So epiphenomenalism is false.

Friday, February 23, 2024

Teaching virtue

A famous Socratic question is whether virtue can be taught. This argument may seem to settle the question:

  1. If vice can be taught, virtue can be taught.

  2. Vice can be taught. (Clear empirical fact!)

  3. So, virtue can be taught.

Well, except that what I labeled as a clear empirical fact is not something that Socrates would accept. I think Socrates reads “to teach” as a success verb, with a necessary condition for teaching being the conveyance of knowledge. In other words, it’s not possible to teach falsehood, since knowledge is always of the truth, and presumably in “teaching” vice one is “teaching” falsehoods such as that greed is good.

That said, if we understand “to teach” in a less Socratic way, as didactic conveyance of views, skills and behavioral traits, then (2) is a clear empirical fact, and (1) is plausible, and hence (3) is plausible.

That said, it would not be surprising if it were harder to teach virtue even in this non-Socratic sense than it is to teach vice. After all, it is surely harder to teach someone to swim well than to swim badly.

Wednesday, October 18, 2023

Losing track of time

Suppose that the full saturated truthbearers are tensed propositions (which I think is essentially what the A-theory of time comes to). Now consider an atom with a half-life of a week. I observe the atom exactly at noon on Monday, and it hasn’t decayed yet. I thereby acquire the belief that the atom has not decayed yet. Now suppose that for the next week I stop changing in any relevant respect, and maintain belief in the same truthbearers, and the atom doesn’t decay. In particular I continue to have the tensed belief that the atom hasn’t decayed yet.

But an odd thing happens. While my belief is reliable enough for knowledge initially—it has a probability 0.9999 of remaining true for the first minute after observation—eventually the reliability goes down. After a day, the probability of truth is down to 0.91, after two days it’s 0.82, and after a week, of course, it’s 0.5. So gradually I lose reliability, and (assuming I had it) knowledge, even though nothing relevant has changed in the world, in me, or around me.

Well, that’s not quite true. For something seems to have changed: my observation has “gotten older”. But it’s still kind of odd—the time slice of the world is relevantly the same right after the observation as a week after.
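
The reliability numbers, as a quick sketch (a half-life of one week, as in the example):

```python
# Probability that the belief "the atom has not decayed yet" is still true t days
# after the Monday-noon observation, for a half-life of one week.
from math import exp, log

HALF_LIFE_DAYS = 7.0

def prob_not_decayed(days):
    return exp(-log(2) * days / HALF_LIFE_DAYS)

for days in (1 / (24 * 60), 1, 2, 7):
    print(f"{days:8.4f} days: {prob_not_decayed(days):.4f}")
# roughly 0.9999 after a minute, 0.91 after a day, 0.82 after two days, 0.50 after a week
```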

Thursday, April 13, 2023

Barn facades and random numbers

Suppose we have a long street with building slots officially numbered 0-999, but with the numbers not posted. At numbers 990–994 and 996–999, we have barn facades with no barn behind them. At all the other numbers, we have normal barns. You know all these facts.

I will assume that the barns are sufficiently widely spaced that you can’t tell by looking around where you are on the street.

Suppose you find yourself at #5 and judge that you are in front of a barn. Intuitively, you know you are in front of a barn. But if you find yourself at #995 and judge you are in front of a barn, you are right, but you don’t know it, as you are surrounded by mere barn facades nearby.

At least that’s the initial intuition (it’s a “safety” intuition in epistemology parlance). But note first that this intuition is based on an unstated assumption, that the buildings are numbered in order. Suppose, instead, that the building numbers were allocated by someone suffering from a numeral reversal disorder, so that, from east to west, the slots are:

  • 000, 100, 200, …, 900, 010, 110, 210, …, 999.

Then when you are at #995, your immediate neighborhood looks like:

  • 595, 695, 795, 895, 995, 006, 106, 206, 306.

And all these are perfectly normal barns. So it seems you know.
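
A small sketch of that reversed-numbering layout (the facade set is the one from the setup above; everything else here is mine):

```python
# Under the digit-reversal numbering, the slot at street position k (east to west)
# bears the number whose three decimal digits are those of k reversed.
FACADES = set(range(990, 995)) | set(range(996, 1000))   # mere facades, per the setup

def number_at_position(k):
    return int(f"{k:03d}"[::-1])

street = [number_at_position(k) for k in range(1000)]
position_of_995 = street.index(995)
neighborhood = street[position_of_995 - 4 : position_of_995 + 5]
print(neighborhood)                                   # [595, 695, 795, 895, 995, 6, 106, 206, 306]
print([n for n in neighborhood if n in FACADES])      # []: every neighbor of #995 is a real barn
```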

But why should knowledge depend on geometry? Why should it matter whether the numerals are apportioned east to west in standard order, or in the order going with the least-significant-digit-first reinterpretation?

Perhaps the intuition here is that when you are at a given number, you could “easily have been” a few buildings to the east or to the west, while it would have been “harder” for you to have been at one of the further away numbers. Thus, it matters whether you are geometrically surrounded by mere barn facades or not.

Let’s assume from now on that the buildings are arranged east to west in standard order: 000, 001, 002, …, 999, and you are at #995.

But how did you get there? Here is one possibility. A random number was uniformly chosen between 0 and 999, hidden from you, and you were randomly teleported to that number. In this case, is there a sense in which it was “easy” for you to have been assigned a neighboring number (say, #994)? That depends on details of the random selection. Here are four cases:

  1. A spinner with a thousand slots was spun.

  2. A ten-sided die (sides numbered 0-9) was rolled thrice, generating the digits in order from left to right.

  3. The same as the previous, except the digits were generated in order from right to left.

  4. A computer picked the random number by first accessing a source of randomness, such as the time, to the millisecond, at which the program was started (or timings of keystrokes or fine details of mouse movements). Then a mathematical transformation was applied to the initial random number, to generate a sequence of cryptographically secure pseudorandom numbers whose relationship to the initial source of randomness is quite complex, eventually yielding the selected number. The mathematical transformations are so designed that one cannot assume that when the inputs are close to each other, the outputs are as well.

In case 1, it is intuitively true that if you landed at #995, you could “easily have been” at 994 or 996, since a small perturbation in the input conditions (starting position of spinner and force applied) would have resulted in a small change in the output.

In case 2, you could “easily have been” at 990-994 or 996-999 instead of 995, since all of these would have simply required the last die roll to have been different. In case 3, it is tempting to say that you could easily have been at these neighboring numbers since that would have simply required the first die roll to have been different. But actually I think cases 2 and 3 are further apart than they initially seem. If the first die roll came out differently, likely rolls two and three would have been different as well. Why? Well, die rolls are sensitive to initial conditions (the height from which the die is dropped, the force with which it is thrown, the spin imparted, the initial position, etc.). If the initial conditions for the first roll were different for some reason, it is very likely that this would have disturbed the initial conditions for the second roll. And getting a different result for the first roll would have affected the roller’s psychological state, and that psychological state feeds in a complex way into the way they will do the second and third rolls. So in case 3, I don’t think we can say that you could “easily” have ended up at a neighboring number. That would have required the first die roll to be different, and then, likely, you would have ended up quite far off.

Finally, in case 4, a good pseudorandom number generator is so designed that the relationship between the initial source of randomness and the outputs is sufficiently messy that a slight change in the inputs is apt to lead to a large change in the outputs, so it is false that you could easily have ended up at a neighboring number—intuitively, had things been different, you wouldn’t have been any more likely to end up at 994 or 996 than at 123 or 378.

I think at this point we can’t hold on, without further qualification about how you ended up where you are, to the initial intuition that at #995 you don’t know you’re in front of a barn but at #5 you would have known. Maybe if you ended up at #995 via the spinner or the left-to-right die rolls, you don’t know, but if you ended up there via the right-to-left die rolls or the cryptographically secure pseudorandom number generator, then there is no relevant difference between #995 and #5.

At this point, I think, the initial intuition should start getting destabilized. There is something rather counterintuitive about the idea that the details of the random number generation matter. Does it really matter for knowledge whether the building number you were transported to was generated right-to-left or left-to-right by die rolls?

Why not just say that you know in all the cases? In all the cases, you engage in simple statistical reasoning: of the 1000 building fronts, 991 are fronts of real barns and only 9 are mere facades, and it’s random which is in front of you, so it is reasonable to think that you are in front of a real barn. Why should the neighboring buildings matter at all?

Perhaps it is this. In your reasoning, you are assuming you’re not in the 990-999 neighborhood. For if you realized you were in that neighborhood, you wouldn’t conclude you’re in front of a barn. But this response seems off-base for two reasons. First, by the same token you could say that when you are at #5, you are assuming you’re not in front of any of the buildings from the following set: {990, 991, 992, 993, 994, 5, 996, 997, 998, 999}. For if you realized you were in front of a building from that set, you wouldn’t have thought you are in front of a barn. But that’s silly. Second, you aren’t assuming that you’re not in the 990-999 neighborhood. For if you were assuming that, then your confidence that you’re in front of a real barn would have been the same as your confidence that you’re not in the 990-999 neighborhood, namely 0.990. But in fact, your confidence that you’re in front of a real barn is slightly higher than that, it is 0.991. For your confidence that you’re in front of a real barn takes into account the possibility that you are at #995, and hence that you are in the 990-999 neighborhood.

Thursday, February 23, 2023

Morality and the gods

In the Meno, we get a solution to the puzzle of why it is that virtue does not seem, as an empirical matter of fact, to be teachable. The solution is that instead of involving knowledge, virtue involves true belief, and true belief is not teachable in the way knowledge is.

The distinction between knowledge and true belief seems to be that knowledge is true opinion made firm by explanatory account (aitias logismoi, 98a).

This may seem to the modern philosophical reader to confuse explanation and justification. It is justification, not explanation, that is needed for knowledge. One can know that sunflowers turn to the sun without anyone knowing why or how they do so. But what Plato seems to be after here is not merely justified true belief, but something like the scientia of the Aristotelians, an explanatorily structured understanding.

But not every area seems like the case of sunflowers. There would be something very odd in a tribe knowing Fermat’s Last Theorem to be true, but without anybody in the tribe, or anybody in contact with the tribe, having anything like an explanation or proof. Mathematical knowledge of non-axiomatic claims typically involves something explanation-like: a derivation from first principles. We can, of course, rely on an expert, but eventually we must come to something proof-like.

I think ethics is in a way similar. There is something very odd about having justified true belief—knowledge in the modern sense—of ethical truths without knowing why they are true. Yet humans often seem to be in just this position: they have correct, and maybe even justified, moral judgments about many things, while not knowing why those judgments are true. What explains this?

Socrates’ answer in the Meno is that it is the gods. The gods instill true moral opinion in people (especially the poets).

This is not a bad answer.

Thursday, January 26, 2023

A cure for some cases of TMI

Sometimes we know things we wish we didn’t. In some cases, without any brainwashing, forgetting or other irrational processes, there is a fairly reliable way to make that wish come true.

Suppose that a necessary condition for knowing is that my evidence yields a credence of at least 0.9900, and that I know p with evidence yielding a credence of 0.9910. Then here is how I can rid myself of the knowledge fairly reliably. I find someone completely trustworthy who would know for sure whether p is true, and I pay them to do the following:

  1. Toss three fair coins.

  2. Inform me whether the following conjunction is true: all coins landed heads and p is true.

Then at least 7/8 of the time, they will inform me that the conjunction is false. That’s a little bit of evidence against p. I do a Bayesian update on this evidence, and my posterior credence will be 0.9897, which is not enough for knowledge. Thus, with at least 7/8 reliability, I can lose my knowledge.
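
That update, as a quick sketch of the arithmetic (nothing here beyond the numbers already in the story):

```python
# Bayesian update on the report "the conjunction (three heads and p) is false",
# starting from a prior credence of 0.9910 in p.
prior = 0.9910
p_conjunction = prior * (1 / 8)                       # P(three heads and p)
p_report_false = 1 - p_conjunction                    # probability of being told "false": >= 7/8
posterior = prior * (7 / 8) / p_report_false          # P(p | conjunction is false)
print(round(p_report_false, 4), round(posterior, 4))  # 0.8761 0.9897 -- just below 0.9900
```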

This method only works if my credence is slightly above what’s needed for knowledge. If what’s needed for knowledge is 0.990, then as soon as my credence rises to 0.995, there is no rational method with reliability better than 1/2 for making me lose the credence needed for knowledge (this follows from Proposition 1 here). So if you find yourself coming to know something that you don’t want to know, you should act fast, or you’ll have so much credence you will be beyond rational help. :-)

More seriously, we think of knowledge as something stable. But since evidence comes in degrees, there have got to be cases of knowledge that are quite unstable—cases where one “just barely knows”. It makes sense to think that if knowledge has some special value, these cases have rather less of it. Maybe it’s because knowledge comes in degrees, and these cases have less knowledge.

Or maybe we should just get rid of the concept of knowledge and theorize in terms of credence, justification and truth.

Wednesday, January 25, 2023

The special value of knowledge

Suppose there is a distinctive and significant value to knowledge. What I mean by that is that if two epistemic states are very similar in terms of truth, the level and type of justification, the subject matter and its relevance to life, the degree of belief, etc., but one is knowledge and the other is not, then the one that is knowledge has a significantly higher value because it is knowledge.

Plausibly, then, if we imagine Alice has some evidence for a truth p that is insufficient for knowledge, and slowly and continuously her evidence for p mounts up, when the evidence has crossed the threshold needed for knowledge, the value of Alice’s state with respect to p will have suddenly and discontinuously increased.

This hypothesis initially seemed to me to have an unfortunate consequence. Suppose Alice has just barely exceeded the threshold for knowledge of p, and she is offered a cost-free piece of information that may turn out to slightly increase or slightly decrease her overall evidence with respect to p, where the decrease would be sufficient to lose her knowledge of p (since she has only “barely” exceeded the evidential threshold for knowledge). It seems that Alice should refuse to look at the information, since the benefit of a slight improvement in credence if the evidence is non-misleading is outweighed by the danger of a significant and discontinuous loss of value due to loss of knowledge.

But that’s not quite right. For from Alice’s point of view, because the threshold for knowledge is not 1, there is a real possibility that p is false. But it may be that, just as there is a discontinuous gain in epistemic value when your (rational) credence becomes sufficient for knowledge of something that is in fact true, there is a discontinuous loss of epistemic value when your credence becomes sufficient for knowledge of something false. (Of course, you can’t know anything false, but you can have evidence-sufficient-for-knowledge with respect to something false.) This is not implausible, and given this, by looking at the information, by her lights Alice also has a chance of a significant gain in value due to losing the illusion of knowledge in something false.

If we think that it’s never rational for a rational agent to refuse free information, then the above argument can be made rigorous to establish that any discontinuous rise in the epistemic value of credence at the point at which knowledge of a truth is reached is exactly mirrored by a discontinuous fall in the epistemic value of a state of credence where seeming-knowledge of a falsehood is reached. Moreover, the rise and the fall must be in the ratio 1 − r : r where r is the knowledge threshold. Note that for knowledge, r is plausibly pretty large, around 0.95 at least, and so the ratio between the special value of knowledge of a truth and the special disvalue of evidence-sufficient-for-knowledge for a falsehood will need to be at most 1:19. This kind of a ratio seems intuitively implausible to me. It seems unlikely that the special disvalue of evidence-sufficient-for-knowledge of a falsehood is an order of magnitude greater than the special value of knowledge. This contributes to my scepticism that there is a special value of knowledge.

Can we rigorously model this kind of an epistemic value assignment? I think so. Consider the following discontinuous accuracy scoring rule s1(x,t), where x is a probability and t is a 0 or 1 truth value:

  • s1(x,t) = 0 if 1 − r ≤ x ≤ r

  • s1(x,t) = a if r < x and t = 1 or if x < 1 − r and t = 0

  • s1(x,t) = −b if r < x and t = 0 or if x < 1 − r and t = 1.

Suppose that a and b are positive and a/b = (1−r)/r. Then if my scribbled notes are correct, it is straightforward but annoying to check that s1 is proper, and it has a discontinuous reward a for meeting threshold r with respect to a truth and a discontinuous penalty −b for meeting threshold r with respect to a falsehood. To get a strictly proper scoring rule, just add to it any strictly proper continuous accuracy scoring rule (e.g., Brier).
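
A brute-force numerical check of propriety for this rule, as a sketch (the threshold, the grid, and the choice a = 1 are mine):

```python
# Grid check that the discontinuous rule s1 above is proper: for each credence x,
# no report y has a higher expected score than reporting x itself.
r = 0.95
a = 1.0
b = a * r / (1 - r)          # so that a/b = (1-r)/r

def s1(y, t):
    if 1 - r <= y <= r:
        return 0.0
    hit = (y > r and t == 1) or (y < 1 - r and t == 0)
    return a if hit else -b

def expected_score(x, y):    # expected score of reporting y when your credence in truth is x
    return x * s1(y, 1) + (1 - x) * s1(y, 0)

grid = [i / 1000 for i in range(1001)]
print(all(expected_score(x, x) >= expected_score(x, y) - 1e-9 for x in grid for y in grid))  # True
```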

Tuesday, January 24, 2023

Thresholds and precision

In a recent post, I noted that it is possible to cook up a Bayesian setup where you don’t meet some threshold, say for belief or knowledge, with respect to some proposition, but you do meet the same threshold with respect to the claim that after you examine a piece of evidence, then you will meet the threshold. This is counterintuitive: it seems to imply that you can know that you will have enough evidence to know something even though you don’t yet. In a comment, Ian noted that one way out of this is to say that beliefs do not correspond to sharp credences. It then occurred to me that one could use the setup to probe the question of how sharp our credences are and what the thresholds for things like belief and knowledge are, perhaps complementarily to the considerations in this paper.

For suppose we have a credence threshold r and that our intuitions agree that we can’t have a situation where:

  (a) we have transparency as to our credences,

  (b) we don’t meet r with respect to some proposition p, but

  (c) we meet r with respect to the proposition that we will meet the threshold with respect to p after we examine evidence E.

Let α > 0 be the “squishiness” of our credences. Let’s say that for one credence to be definitely bigger than another, their difference has to be at least α, and that to definitely meet (fail to meet) a threshold, we must be at least α above (below) it. We assume that our threshold r is definitely less than one: r + α ≤ 1.

We now want this constraint on r and α:

  1. We cannot have a case where (a), (b) and (c) definitely hold.

What does this tell us about r and α? We can actually figure this out. Consider a test for p that has no false negatives, but has a false positive rate of β. Let E be a positive test result. Our best bet for generating a counterexample to (a)–(c) will be if the prior for p is as close to r as possible while yet definitely below it, i.e., if the prior for p is r − α. For putting the prior there makes (c) easier to definitely satisfy while keeping (b) definitely satisfied. Since there are no false negatives, the posterior for p will be:

  2. P(p|E) = P(p)/P(E) = (r−α)/(r−α+β(1−(r−α))).

Let z = r − α + β(1−(r−α)) = (1−β)(r−α) + β. This is the prior probability of a positive test result. We will definitely meet r on a positive test result just in case we have (r−α)/z = P(p|E) ≥ r + α, i.e., just in case

  3. z ≤ (r−α)/(r+α).

(We definitely won’t meet r on a negative test result.) Thus to get (c) definitely true, we need (3) to hold as well as the probability of a positive test result to be at least r + α:

  4. z ≥ r + α.

Note that by appropriate choice of β, we can make z be anything between r − α and 1, and the right-hand-side of (3) is at least r − α since r + α ≤ 1. Thus we can make (c) definitely hold as long as the right-hand-side of (3) is bigger than or equal to the right-hand-side of (4), i.e., if and only if:

  5. (r+α)² ≤ r − α

or, equivalently:

  6. α ≤ (1/2)((1+8r)^(1/2) − 1 − 2r).

It’s in fact not hard to see that (6) is necessary and sufficient for the existence of a case where (a)–(c) definitely hold.
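
Here is a brute-force check of (6), under the setup above (prior r − α, no false negatives, false-positive rate β free to vary); the grid sizes and thresholds tried are mine:

```python
# For each threshold r, find the largest squishiness alpha (on a 0.001 grid) for which
# some false-positive rate beta yields a case where (a)-(c) definitely hold, and compare
# it with the right-hand side of (6).
from math import sqrt

def case_exists(r, alpha, beta_steps=20_000):
    prior = r - alpha                               # definitely below the threshold, as in (b)
    for i in range(beta_steps + 1):
        z = prior + (i / beta_steps) * (1 - prior)  # probability of a positive test result
        if prior / z < r + alpha:                   # (3) fails here and for every larger beta
            return False
        if z >= r + alpha:                          # (3) and (4) both definitely hold
            return True
    return False

for r in (0.5, 0.9, 0.95, 0.98):
    formula = 0.5 * (sqrt(1 + 8 * r) - 1 - 2 * r)   # right-hand side of (6)
    largest = next(a / 1000 for a in range(129, 0, -1) if case_exists(r, a / 1000))
    print(r, round(formula, 4), largest)            # the search agrees with (6) to within the grid
```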

We thus have our joint constraint on the squishiness of our credences: bad things happen if our credences are so precise as to make (6) true with respect to a threshold r for which we don’t want (a)–(c) to definitely hold. The easiest scenario for making (a)–(c) definitely hold will be a binary test with no false negatives.

What exactly that says about α depends on where the relevant threshold lies. If the threshold r is 1/2, the squishiness needed for paradox is about α = 0.12. That’s surely higher than the actual squishiness of our credences. So if we are concerned merely with the threshold being more-likely-than-not, then we can’t avoid the paradox, because there will be cases where our credence in p is definitely below the threshold while our credence that examining the evidence will push us above the threshold is itself definitely above the threshold.

But what’s a reasonable threshold for belief? Maybe something like 0.9 or 0.95. At r = 0.9, the squishiness needed for paradox is about α = 0.032. I suspect our credences are more precise than that. If we agree that the squishiness of our credences is less than 3.2%, then we have an argument that the threshold for belief is more than 0.9. On the other hand, at r = 0.95, the squishiness needed for paradox is about 1.6%. At this point, it becomes more plausible that our credences lack that kind of precision, but it’s not clear. At r = 0.98, the squishiness needed for paradox dips below 1%. Depending on how precise we think our credences are, we get an argument that the threshold for belief is something like 0.95 or 0.98.

Here's a graph of the squishiness-for-paradox α against the threshold r:

Note that the squishiness of our credences likely varies with where the credences lie on the line from 0 to 1, i.e., varies with respect to the relevant threshold. For we can tell the difference between 0.999 and 1.000, but we probably can’t tell the difference between 0.700 and 0.701. So the squishiness should probably be counted relative to the threshold. Or perhaps it should be correlated to log-odds. But I need to get to looking at grad admissions files now.

Saturday, January 21, 2023

Knowing you will soon have enough evidence to know

Suppose I am just the slightest bit short of the evidence needed for belief that I have some condition C. I consider taking a test for C that has a zero false negative rate and a middling false positive rate—neither close to zero nor close to one. On reasonable numerical interpretations of the previous two sentences:

  1. I have enough evidence to believe that the test would come out positive.

  2. If the test comes out positive, it will be another piece of evidence for the hypothesis that I have C, and it will push me over the edge to belief that I have C.

To see that (1) is true, note that the test is certain to come out positive if I have C and has a significant probability of coming out positive even if I don’t have C. Hence, the probability of a positive test result will be significantly higher than the probability that I have C. But I am just the slightest bit short of the evidence needed for belief that I have C, so the evidence that the test would be positive (let’s suppose a deterministic setting, so we have no worries about the sense of the subjunctive conditional here) is sufficient for belief.

To see that (2) is true, note that given that the false negative rate is zero, and the false positive rate is not close to one, I will indeed have non-negligible evidence for C if the test is positive.

If I am rational, my beliefs will follow the evidence. So if I am rational, in a situation like the above, I will take myself to have a way of bringing it about that I believe, and do so rationally, that I have C. Moreover, this way of bringing it about that I believe that I have C will itself be perfectly rational if the test is free. For of course it’s rational to accept free information. So I will be in a position where I am rationally able to bring it about that I rationally believe C, while not yet believing it.

In fact, the same thing can be said about knowledge, assuming there is knowledge in lottery situations. For suppose that I am just the slightest bit short of the evidence needed for knowledge that I have C. Then I can set up the story such that:

  3. I have enough evidence to know that the test would come out positive,

and:

  4. If the test comes out positive, I will have enough evidence to know that I have C.

In other words, oddly enough, just prior to getting the test results I can reasonably say:

  5. I don’t yet have enough evidence to know that I have C, but I know that in a moment I will.

This sounds like:

  6. I don’t know that I have C but I know that I will know.

But (6) is absurd: if I know that I will know something, then I am in a position to know that the matter is so, since that I will know p entails that p is true (assuming that p doesn’t concern an open future). However, there is no similar absurdity in (5). I may know that I will have enough evidence to know C, but that’s not the same as knowing that I will know C or even be in a position to know C. For it is possible to have enough evidence to know something without being in a position to know it (namely, when the thing isn’t true or when one is Gettiered).

Still, there is something odd about (5). It’s a bit like the line:

  7. After we have impartially reviewed the evidence, we will execute him.

Appendix: Suppose the threshold for belief or knowledge is r, where r < 1. Suppose that the false-positive rate for the test is 1/2 and the false-negative rate is zero. If E is a positive test result, then P(C|E) = P(C)P(E|C)/P(E) = P(C)/P(E) = 2P(C)/(1+P(C)). It follows by a bit of algebra that if my prior P(C) is more than r/(2−r), then P(C|E) is above the threshold r. Since r < 1, we have r/(2−r) < r, and so the story (either in the belief or knowledge form) works for the non-empty range of priors strictly between r/(2−r) and r.
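
A quick numerical instance of the appendix, as a sketch (the threshold value and the prior are mine, chosen to fall in the stated range):

```python
# With a false-positive rate of 1/2 and a false-negative rate of 0, a positive result E
# gives P(C|E) = 2*P(C)/(1 + P(C)), which exceeds r exactly when P(C) > r/(2 - r).
r = 0.95                                 # illustrative threshold for belief/knowledge
lower = r / (2 - r)                      # ~0.9048: priors above this get pushed over r by E
prior = 0.93                             # strictly between r/(2-r) and r, as the story requires

p_positive = prior + 0.5 * (1 - prior)   # P(E) = P(C) + (1/2) * P(not C)
posterior = prior / p_positive           # P(C|E), since P(E|C) = 1
print(round(lower, 4), round(p_positive, 4), round(posterior, 4))
# 0.9048 0.965 0.9637: P(E) and P(C|E) both exceed r = 0.95, while the prior does not.
```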

Friday, October 28, 2022

Does our ignorance always grow when we learn?

Here is an odd thesis:

  1. Whenever you gain a true belief, you gain a false belief.

This follows from:

  2. Whenever you gain a belief, you gain a false belief.

The argument for (2) is:

  3. You always have at least one false belief.

  4. You believe a conjunction if and only if you believe the conjuncts.

  5. Suppose you just gained a belief p.

  6. There is now some false belief q that you have. (By (3))

  7. Before you gained the belief p you didn’t believe the conjunction of p and q. (By (4))

  8. So, you just gained the belief in the conjunction of p and q. (By (4)–(7))

  9. The conjunction of p and q is false. (By (6))

  10. So, you just gained a false belief. (By (8) and (9))

I am not sure I accept (4), though.

Tuesday, October 25, 2022

Learning from what you know to be false

Here’s an odd phenomenon. Someone tells you something. You know it’s false, but their telling it to you raises the probability of it.

For instance, suppose at the beginning of a science class you are teaching your students about significant figures, and you ask a student to tell you the mass of a textbook in kilograms. They put it on a scale calibrated in pounds, look up on the internet that a pound is exactly 0.45359237 kg, and report that the mass of the object is 1.496854821 kg.

Now, you know that the classroom scale is not accurate to ten significant figures. The chance that the student’s measurement was right to ten significant figures is tiny. You know that the student’s statement is wrong, assuming that it is in fact wrong.

Nonetheless, even though you know the statement is wrong, it raises the probability that the textbook’s mass is 1.496854821 kg (to ten significant figures). For while most of the digits are garbage, the first couple are likely close. Before you heard the student’s statement, you might have estimated the mass as somewhere between one and two kilograms. Now you estimate it as between 1.45 and 1.55 kg, say. That raises the probability that in fact, up to ten significant figures, the mass is 1.496854821 kg by about a factor of ten.

So, you know that what the student says is false, but your credence in the content has just gone up by a factor of ten.

Of course, some people will want to turn this story into an argument that you don’t know that the student’s statement is wrong. My preference is just to make this statement another example of why knowledge is an unhelpful category.