Tuesday, February 18, 2025
An example of a value-driven epistemological approach to metaphysics
Everything that exists is intrinsically valuable.
Shadows and holes are not intrinsically valuable.
So, neither shadows nor holes exist.
Monday, January 27, 2025
Comparing experiments
When you’re investigating reality as a scientist (and often as an ordinary person) you perform experiments. Epistemologists and philosophers of science have spent a lot of time thinking about how to evaluate what you should do with the results of the experiments—how they should affect your beliefs or credences—but relatively little on the important question of which experiments you should perform epistemologically speaking. (Of course, ethicists have spent a good deal of time thinking about which experiments you should not perform morally speaking.) Here I understand “experiment” in a broad sense that includes such things as pulling out a telescope and looking in a particular direction.
One might think there is not much to say. After all, it all depends on messy questions of research priorities and costs of time and material. But we can at least abstract from the costs and quantify over epistemically reasonable research priorities, and define:
1. E2 is epistemically at least as good an experiment as E1 provided that for every epistemically reasonable research priority, E2 would serve the priority at least as well as E1 would.
That’s not quite right, however. For we don’t know how well an experiment would serve a research priority unless we know the result of the experiment. So a better version is:
2. E2 is epistemically at least as good an experiment as E1 provided that for every epistemically reasonable research priority, the expected degree to which E2 would serve the priority is at least as high as the expected degree to which E1 would.
Now we have a question we can address formally.
Let’s try.
3. A reasonable epistemic research priority is a strictly proper scoring rule or epistemic utility, and the expected degree to which an experiment would serve that priority is equal to the expected value of the score after Bayesian update on the result of the experiment.
(Since we’re only interested in expected values of scores, we can replace “strictly proper” with “strictly open-minded”.)
And we can identify an experiment with a partition of the probability space: the experiment tells us where we are in that partition. (E.g., if you are measuring some quantity to some number of significant digits, the cells of the partition are equivalence classes under equality of the quantity up to those many significant digits.) The following is then easy to prove:
Proposition 1: On definitions (2) and (3), an experiment E2 is epistemically at least as good as experiment E1 if and only if the partition associated with E2 is essentially at least as fine as the partition associated with E1.
A partition R2 is essentially at least as fine as a partition R1 provided that for every event A in R1 there is an event B in the algebra generated by R2 (i.e., a union of cells of R2) such that with probability one B happens if and only if A happens. The definition is relative to the current credences, which are assumed to be probabilistic. If the current credences are regular—all non-empty events have non-zero probability—then “essentially” can be dropped.
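Here is a minimal sketch, in Python over a finite outcome space with made-up probabilities (none of this is in the post), of what the “essentially at least as fine” check comes to:

```python
# A minimal sketch (not from the post) of the "essentially at least as fine"
# comparison over a finite outcome space with a hypothetical probability
# assignment. Cells are frozensets of outcomes; a partition is a list of cells.

def prob(event, p):
    """Probability of a set of outcomes under the assignment p."""
    return sum(p[w] for w in event)

def essentially_at_least_as_fine(r2, r1, p):
    """True if every cell of r1 coincides, up to probability zero, with a
    union of cells of r2 (so r2's result settles r1's result almost surely)."""
    for a in r1:
        # Union of the r2-cells that overlap a with positive probability.
        b = set().union(*[c for c in r2 if prob(c & a, p) > 0])
        # The symmetric difference of a and b must be a null event.
        if prob(a - b, p) > 0 or prob(b - a, p) > 0:
            return False
    return True

# Toy example: four equiprobable outcomes.
p = {1: 0.25, 2: 0.25, 3: 0.25, 4: 0.25}
fine = [frozenset({1}), frozenset({2}), frozenset({3, 4})]
coarse = [frozenset({1, 2}), frozenset({3, 4})]
print(essentially_at_least_as_fine(fine, coarse, p))   # True
print(essentially_at_least_as_fine(coarse, fine, p))   # False
```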
However, Proposition 1 suggests that our choice of definitions isn’t that helpful. Consider two experiments. On E1, all the faculty members from your Geology Department have their weight measured to the nearest hundred kilograms. On E2, a thousand randomly chosen individuals around the world have their weight measured to the nearest kilogram. Intuitively, E2 is better. But Proposition 1 shows that in the above sense neither experiment is better than the other, since they generate partitions neither of which is essentially finer than the other (the event of there being a member of the Geology Department with weight at least 150 kilograms is settled by the partition of E1, but nothing coinciding with that event up to probability zero is settled by the partition of E2). And this is to be expected. For suppose that our research priority is to know whether any members of your Geology Department are at least 150 kilograms in weight, because we need to know whether, for a departmental cave-exploring trip, the current selection of harnesses, all of which are rated for users under 150 kilograms, is sufficient. Then E1 is better. On the other hand, if our research priority is to know the average weight of a human being to the nearest ten kilograms, then E2 is better.
The problem with our definitions is that the range of possible research priorities is just too broad. Here is one interesting way to narrow it down. When we are talking about an experiment’s epistemic value, we mean the value of the experiment towards a set of questions. If the set of questions is a scientifically typical set of questions about human population weight distribution, then E2 seems better than E1. But if it is an atypical set of questions about the Geology Department members’ weight distribution, then E1 might be better. We can formalize this, too. We can identify a set Q of questions with a partition of probability space representing the possible answers. This partition then generates an algebra F_Q on the probability space, which we can call the “question algebra”. Now we can relativize our definitions to a set of questions.
4. E2 is epistemically at least as good an experiment as E1 for a set of questions Q provided that for every epistemically reasonable research priority on Q, the expected degree to which E2 would serve the priority is at least as high as the expected degree to which E1 would.
5. A reasonable epistemic research priority on a set of questions Q is a strictly proper scoring rule or epistemic utility on F_Q, and the expected degree to which an experiment would serve that priority is equal to the expected value of the score after Bayesian update on the result of the experiment.
We recover the old definitions by being omnicurious, namely letting Q be all possible questions.
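To make the relativized comparison concrete, here is a small sketch (again Python over a finite space with made-up numbers; the Brier score stands in for one representative strictly proper rule) of the expected posterior score on the question partition after updating on an experiment’s result:

```python
# A sketch (not from the post) of the relativized comparison: compute the
# expected posterior Brier score on the cells of the question partition after
# Bayesian update on the experiment's result. Lower is better; comparing two
# experiments for Q means comparing such expected scores (for every strictly
# proper rule, with the Brier score here as one stand-in).

def prob(event, p):
    return sum(p[w] for w in event)

def expected_posterior_brier(experiment, question, p):
    total = 0.0
    for e in experiment:
        pe = prob(e, p)
        if pe == 0:
            continue
        # Posterior over the question's cells given the experimental result e.
        post = [prob(q & e, p) / pe for q in question]
        # Expected Brier score of that posterior over which cell is true.
        score = sum(post[i] * sum((post[j] - (1.0 if j == i else 0.0)) ** 2
                                  for j in range(len(question)))
                    for i in range(len(question)))
        total += pe * score
    return total

# Toy example: the question is whether the outcome is even; one experiment
# reports the outcome exactly, the other reports nothing.
p = {1: 0.25, 2: 0.25, 3: 0.25, 4: 0.25}
question = [frozenset({2, 4}), frozenset({1, 3})]
exact = [frozenset({w}) for w in p]
vacuous = [frozenset(p)]
print(expected_posterior_brier(exact, question, p))    # 0.0 (perfect for Q)
print(expected_posterior_brier(vacuous, question, p))  # 0.5 (no help for Q)
```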
What about Proposition 1? Well, one direction remains: if E2’s partition is essentially at least as fine as E1’s, then E2 is at least as good with regard to any set of questions, and in particular with regard to Q. But what about the other direction? Now the answer is negative. Suppose the question is what the average weight of the six members of the Geology Department is up to the nearest 100 kg. Consider two experiments: on the first, the members are ordered alphabetically by first name, and a fair die is rolled to choose one (if you roll 1, you choose the first, etc.), and their weight is measured. On the second, the same is done but with the ordering being by last name. Assuming the two orderings are different, neither experiment’s partition is essentially at least as fine as the other’s, but the expected contributions of both experiments towards our question are equal.
Is there a nice characterization in terms of partitions of when E2 is at least as good as E1 with regard to a set of questions Q? I don’t know. It wouldn’t surprise me if there was something in the literature. A nice start would be to see if we can answer the question in the special case where Q is a single binary question and where E1 and E2 are binary experiments. But I need to go for a dental appointment now.
Monday, August 5, 2024
Natural reasoning vs. Bayesianism
A typical Bayesian update gets one closer to the truth in some respects and further from the truth in other respects. For instance, suppose that you toss a coin and get heads. That gets you much closer to the truth with respect to the hypothesis that you got heads. But it confirms the hypothesis that the coin is double-headed, and this likely takes you away from the truth. Moreover, it confirms the conjunctive hypothesis that you got heads and there are unicorns, which takes you away from the truth (assuming there are no unicorns; if there are unicorns, insert a “not” before “are”). Whether the Bayesian update is on the whole a plus or a minus depends on how important the various propositions are. If for some reason saving humanity hangs on you getting it right whether you got heads and there are unicorns, it may well be that the update is on the whole a harm.
(To see the point in the context of scoring rules, take a weighted Brier score which puts an astronomically higher weight on you got heads and there are unicorns than on all the other propositions taken together. As long as all the weights are positive, the scoring rule will be strictly proper.)
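To put some purely illustrative numbers on the parenthetical point: suppose the prior credence in unicorns is 1/10, independent of the coin, and the weight on the conjunction is a million. A short sketch, assuming those made-up figures:

```python
# A hypothetical numeric sketch of the parenthetical point above: a weighted
# Brier score with a huge weight on the (false) conjunction "heads and
# unicorns" gets worse after Bayesian update on "heads", even though the
# update helps on the other propositions. All numbers are illustrative only.

def weighted_brier(credences, truths, weights):
    """Penalty (lower is better): sum of weight * (credence - truth)^2."""
    return sum(weights[a] * (credences[a] - truths[a]) ** 2 for a in credences)

# Propositions: H = heads, U = unicorns, HU = heads-and-unicorns.
truths = {"H": 1.0, "U": 0.0, "HU": 0.0}       # actual world: heads, no unicorns
prior = {"H": 0.5, "U": 0.1, "HU": 0.05}       # independent prior credences
posterior = {"H": 1.0, "U": 0.1, "HU": 0.1}    # after conditioning on heads
weights = {"H": 1.0, "U": 1.0, "HU": 10**6}    # astronomical weight on HU

print(weighted_brier(prior, truths, weights))      # ~2500.26
print(weighted_brier(posterior, truths, weights))  # ~10000.01 (worse after update)
```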
This means that there are logically possible update rules that do better than Bayesian update. (In my example, leaving the probability of the proposition you got heads and there are unicorns unchanged after learning that you got heads is superior, even though it results in inconsistent probabilities. By the domination theorem for strictly proper scoring rules, there is an even better method than that which results in consistent probabilities.)
Imagine that you are designing a robot that maneuvers intelligently around the world. You could make the robot a Bayesian. But you don’t have to. Depending on what the prioritizations among the propositions are, you might give the robot an update rule that’s superior to a Bayesian one. If you have no more information than you endow the robot with, you can’t expect to be able to design such an update rule. (Bayesian update has optimal expected accuracy given the pre-update information.) But if you know a lot more than you tell the robot—and of course you do—you might well be able to.
Imagine now that the robot is smart enough to engage in self-reflection. It then notices an odd thing: sometimes it feels itself pulled to make inferences that do not fit with Bayesian update. It starts to hypothesize that by nature it’s a bad reasoner. Perhaps it tries to change its programming to be more Bayesian. Would it be rational to do that? Or would it be rational for it to stick to its programming, which in fact is superior to Bayesian update? This is a difficult epistemology question.
The same could be true for humans. God and/or evolution could have designed us to update on evidence differently from Bayesian update, and this could be epistemically superior (God certainly has superior knowledge; evolution can “draw on” a myriad of information not available to individual humans). In such a case, switching from our “natural update rule” to Bayesian update would be epistemically harmful—it would take us further from the truth. Moreover, it would be literally unnatural. But what does rationality call on us to do? Does it tell us to do Bayesian update or to go with our special human rational nature?
My “natural law epistemology” says that sticking with what’s natural to us is the rational thing to do. We shouldn’t redesign our nature.
Thursday, February 23, 2023
Morality and the gods
In the Meno, we get a solution to the puzzle of why it is that virtue does not seem, as an empirical matter of fact, to be teachable. The solution is that instead of involving knowledge, virtue involves true belief, and true belief is not teachable in the way knowledge is.
The distinction between knowledge and true belief seems to be that knowledge is true opinion made firm by explanatory account (aitias logismoi, 98a).
This may seem to the modern philosophical reader to confuse explanation and justification. It is justification, not explanation, that is needed for knowledge. One can know that sunflowers turn to the sun without anyone knowing why or how they do so. But what Plato seems to be after here is not merely justified true belief, but something like the scientia of the Aristotelians, an explanatorily structured understanding.
But not every area seems like the case of sunflowers. There would be something very odd in a tribe knowing Fermat’s Last Theorem to be true, but without anybody in the tribe, or anybody in contact with the tribe, having anything like an explanation or proof. Mathematical knowledge of non-axiomatic claims typically involves something explanation-like: a derivation from first principles. We can, of course, rely on an expert, but eventually we must come to something proof-like.
I think ethics is in a way similar. There is something very odd about having justified true belief—knowledge in the modern sense—of ethical truths but not knowing why they are true. Yet it seems humans are often in this position: they have correct, and maybe even justified, moral judgments about many things, but do not know why those judgments are true. What explains this?
Socrates’ answer in the Meno is that it is the gods. The gods instill true moral opinion in people (especially the poets).
This is not a bad answer.
Thursday, June 23, 2022
What I think is wrong with Everettian quantum mechanics
One can think of Everettian multiverse quantum mechanics as beginning by proposing two theses:
1. The global wavefunction evolves according to the Schroedinger equation.
2. Superpositions in the global wavefunction can be correctly interpreted as equally real branches in a multiverse.
But prima facie, these two theses don’t fit with observation. If one prepares a quantum system in a (3/5)|↑⟩+(4/5)|↓⟩ spin state, and then observes the spin, one will observe spin up in |3/5|^2 = 9/25 of cases and spin down in |4/5|^2 = 16/25 of cases. But (roughly speaking) there will be two equally real branches corresponding to the two outcomes, and so prima facie one would expect equally likely observations, which doesn’t fit observation. But the Everettian adds a third thesis:
3. One ought to make predictions as to which branch one will observe in proportion to the square of the modulus of the coefficient that the branch has in the global wavefunction.
Since the abandonment of Aristotelian science, there has been a fruitful division of labor between natural science and philosophy, where investigation of normative phenomena has been relegated to philosophy while science concerned itself with the non-normative. From that point of view, while (1) and (less clearly but arguably) (2) belong to the domain of science, (3) does not. Instead, (3) belongs to epistemology, which is the study of the norms of thought.
This point is not a criticism. Just as a doctor who has spent much time dealing sensitively with complex cases will have unique insights into bioethics, a scientist who has spent much time dealing sensitively with evidence will have unique insights into scientific epistemology. But it is useful, because the division of intellectual labor is useful, to remember that (3) is not a scientific claim in the modern sense. And there is nothing wrong with that as such, since many non-scientific claims, such as that one shouldn’t lie and that one should update by conditionalization, are true and important to the practice of the scientific enterprise.
But (3) is a non-scientific claim that is absurd. Imagine that a biologist came up with a theory that predicted, on the basis of genetics and environment, that:
4. There are equal numbers of male and female infant spider monkeys.
You might have thought that this theory is empirically disproved by observations of a lot more female than male infant spider monkeys. But our biologist is clever, and comes up with this epistemological theory:
5. One ought to make predictions as to the sex of an infant spider monkey one will observe in inverse proportion to the ninth power of the average weight of that sex of spider monkeys.
And now, because male spider monkeys are slightly larger than females, we will make predictions that roughly fit our observations.
Here’s what went wrong in our silly biological example. The biologist’s epistemological claim (5) was not fitted to the actual ontology of the biologist’s theory. Instead, basically, the biologist said: when making predictions of future observations, make them in the way that you should if you thought the sex ratios were inversely proportional to the ninth power of the average weights, even though they aren’t.
This is silly. But exactly the same thing is going on in the Everett case. We are being told to make predictions in the way you should if the modulus squares of the weights in the superposition were chances of collapse. But they are not.
It is notorious that any scientific theory can be saved from empirical disconfirmation by adding enough auxiliary scientific hypotheses. But one can also save any scientific theory from empirical disconfirmation by adding an auxiliary philosophical hypothesis as to how confirmation or disconfirmation ought to proceed. And doing that may be worse than obstinately adding auxiliary scientific hypotheses. For auxiliary scientific hypotheses can often be tested and disproved. But an auxiliary epistemological hypothesis may simply close the door to refutation.
To put it positively, we want a certain degree of independence between epistemological principles and the ontology of a theory so that the ontology of the theory can be judged by the principles.
Monday, November 1, 2021
Determinism and thought
Occasionally, people have thought that one can refute determinism as follows:
If determinism is true, then all our thinking is determined.
If our thinking is determined, then it is irrational to trust its conclusions.
It is not irrational to trust the conclusions of our thinking.
So, determinism is not true.
But now notice that, plausibly, even if we have indeterministic free will, other animals don’t. And yet it seems at least as reasonable to trust a dog’s epistemic judgment—say, as to the presence of an intruder—as a human’s. Nor would learning that a dog’s thinking is determined or not determined make any difference to our trust in its reliability.
One might respond that things are different in a first-person case. But I don’t see why.
Friday, September 10, 2021
Comparing the epistemic relevance of measurements
Suppose P is a regular probability on (the powerset of) a finite space Ω representing my credences. A measurement M is a partition of Ω into disjoint events E1, ..., En, with the result of the measurement being one of these events. In a given context, my primary interest is some subalgebra F of the powerset of Ω.
Note that a measurement can be epistemically relevant to my primary interest without any of the events in the measurement being something I have a primary interest in. If I am interested in figuring out whether taller people smile more, my primary interest will be some algebra F generated by a number of hypotheses about the degree to which height and smiliness are correlated in the population. Then the measurement of Alice’s height and smiliness will not be a part of my primary interest, but it will be epistemically relevant to my primary interest.
Now, some measurements will be more relevant with respect to my primary interest than others. Measuring Alice’s height and smiliness will intuitively be more relevant to my primary interest about height/smile correlation, while measuring Alice’s mass and eye color will be less so.
The point of this post is to provide a relevance-based partial ordering on possible measurements. In fact, I will offer three, but I believe they are equivalent.
First, we have a pragmatic ordering. A measurement M1 is at least as pragmatically relevant to F as a measurement M2, relative to our current (prior) credence assignment P, just in case for every possible F-based wager W, the P-expected utility of wagering on W after a Bayesian update on the result of M1 is at least as big as that of wagering on W after updating on the result of M2, and M1 is more relevant if for some wager W the expected utility of wagering after updating on the result of M1 is strictly greater.
Second, we have an accuracy ordering. A measurement M1 is at least as accuracy relevant to F as a measurement M2 just in case for every proper scoring rule s on F, the expected score of updating on the result of M1 is better than or equal to the expected score of updating on the result of M2, and M1 is more relevant when for some scoring rule the expected score is better in the case of M1.
Third, we have a geometric ordering. Let H_{P,F}(M) be the horizon of a measurement M, namely the set of all possible posterior credence assignments on F obtained by starting with P, conditionalizing on one of the possible events that M partitions Ω into, and restricting to F. Then we say that M1 is at least as (more) geometrically relevant to F as M2 just in case the convex hull of the horizon of M1 contains (strictly contains) the convex hull of the horizon of M2.
I have not written out the details, but I am pretty sure that all three orderings are equivalent, which suggests that I am on to something with these concepts.
An interesting special case is when one’s interest is binary, an algebra generated by a single hypothesis H, and the measurements are binary, i.e., partitions into two sets. In that case, I think, a measurement M1 is at least as (more) relevant as a measurement M2 if and only if the interval whose endpoints are the Bayes factors of the events in M1 contains (strictly contains) the interval whose endpoints are the Bayes factors of the events in M2.
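Here is a small sketch of that binary special case (Python, with a toy coin-and-noisy-report setup and numbers of my own, not from the post): a sharper binary measurement’s Bayes-factor interval strictly contains that of a noisier one.

```python
# A sketch (toy numbers, not from the post) of the binary special case: each
# binary measurement is summarized by the interval spanned by the Bayes
# factors of its two possible results for the hypothesis H, and here a noisier
# measurement's interval sits strictly inside a sharper one's.

def prob(event, p):
    return sum(p[w] for w in event)

def bayes_factor(result, h, p, omega):
    """P(result | H) / P(result | not-H)."""
    not_h = omega - h
    return (prob(result & h, p) / prob(h, p)) / (prob(result & not_h, p) / prob(not_h, p))

def bf_interval(measurement, h, p, omega):
    factors = sorted(bayes_factor(e, h, p, omega) for e in measurement)
    return factors[0], factors[-1]

# Outcomes are (coin, toss, noisy report): the biased coin lands heads with
# probability 0.8, the fair one with 0.5; the report matches the toss with
# probability 0.8. H is the hypothesis that the coin is biased.
p = {}
for coin, ph in (("bias", 0.8), ("fair", 0.5)):
    for toss, pt in (("H", ph), ("T", 1 - ph)):
        for rep, pr in ((toss.lower(), 0.8), (("t" if toss == "H" else "h"), 0.2)):
            p[(coin, toss, rep)] = 0.5 * pt * pr

omega = frozenset(p)
H = frozenset(w for w in omega if w[0] == "bias")
observe_toss = [frozenset(w for w in omega if w[1] == v) for v in ("H", "T")]
observe_report = [frozenset(w for w in omega if w[2] == v) for v in ("h", "t")]

print(bf_interval(observe_toss, H, p, omega))    # ≈ (0.4, 1.6)
print(bf_interval(observe_report, H, p, omega))  # ≈ (0.64, 1.36): contained in the above
```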
Thursday, September 9, 2021
What question should I ask?
Epistemology is heavily focused on the question of evaluating a doxastic state given a set of evidence: is the state rational or irrational, is it knowledge or opinion, etc. This can be useful to the epistemic life of an agent, but there is something else that is at least as useful and does not get discussed nearly as much: the question of how we should go about gathering evidence or, equivalently, what experiments (broadly understood) we should perform.
[The rest of the post was based on a mathematical error and has been deleted. The next post is an attempt to fix the error.]
Friday, July 9, 2021
Naturalness and induction
David Lewis’s notion of the naturalness of predicates may seem at first sight like just the thing to solve Goodman’s new riddle of induction: unlike green, grue is too unnatural for induction with respect to grue to be secure.
But this fails.
Roughly speaking, an object is green provided its emissivity or reflectivity as restricted to the visible range has a sufficiently pronounced peak around 540 nm. But in reality, it’s more complicated than that. An object’s emissivity and reflectivity might well have significantly different spectral profiles (think of a red LED that is reflectively white, as can be seen by turning it off), and one needs to define some sort of “normal conditions” combination of the two features. Describing these normal conditions will be quite complex, thereby making the concept of green be quite far from natural.
Now, it is much easier to define the concepts of emissively black (eblack) and emissively white (ewhite) than of green (or black or white, for that matter) in terms of the fundamental concepts of physics. And emeralds, we think, are eblack (since they don’t emit visible light). Then, just as Goodman defined grue as being observed before a certain date and being green or being observed after that date and being blue, we can define eblite as existing wholly before 2100 and being eblack or existing wholly after 2100 and being ewhite. And here is the crucial thing: the concept of eblite is actually way more natural, in the Lewis sense of “natural”, than the concept of green. For the definition of eblite does not require the complexities of the normal conditions combination of emissivity and reflectivity.
Thus, if what makes induction with green work better than induction with grue is that greenness is more natural than grueness, then induction with eblite (over short-lived entities like snowflakes, say) should work even better than induction with green, since ebliteness is much more natural than greenness. But we know that we shouldn’t do induction with eblite: even though all the snowflakes we have observed are eblite, we shouldn’t assume that in the next century the snowflakes will still be eblite (i.e., that they will start to have a white glow). Or, contrapositively, if eblite is insufficiently natural for induction, green is much too unnatural for induction.
Moreover, this points to a better story. Lewisian unnaturalness measures the complexity of a property relative to the properties that are in themselves perfectly natural. But this is unsatisfactory for epistemological purposes, since the perfectly natural properties are ones that we are far from having discovered as yet. Rather, for epistemological purposes, what we want to do is measure the complexity of a property relative to the properties that are for us perfectly natural. (This, of course, is meant to recall Aristotle’s distinction between what is more understandable in itself and what is more understandable for us.) The properties that are for us perfectly natural are the directly observable ones. And now the in itself messy property of greenness beats not only grue and eblite, but even the much more in itself natural property of eblack.
This can’t be the whole story. In more scientifically developed cases, we will have an interplay of induction with respect to for us natural properties (including ones involved in reading data off lab equipment) and in themselves natural properties.
And there is the deep puzzle of why we should trust induction with respect to what is merely for us natural. My short answer is that it is our nature to do so, and our nature sets our epistemic norms.
Wednesday, January 13, 2021
Epistemology and the presumption of (im)permissibility
Normally, our overt behavior has the presumption of moral permissibility: an action is morally permissible unless there is some specific reason why it would be morally impermissible.
Oddly, this is not so in epistemology. Our doxastic behavior seems to come along with a presumption of epistemic impermissibility. A belief or inference is only justified when there is a specific reason for that justification.
In ethics, there are two main ways of losing the presumption of moral permissibility in an area of activity.
The first is that actions falling in that area are prima facie bad, and hence a special justification is needed for them. Violence is an example: a violent action is by default impermissible, unless we have a special reason that makes it permissible. The second family of cases is areas of action that are dangerous. When we go into a nuclear power facility or a functioning temple, we are surrounded by danger—physical or religious—and we should refrain from actions unless we have special reason to think they are safe.
Belief isn’t prima facie bad. But maybe it is prima facie dangerous? But the presumption of impermissibility is not limited to some special areas. There indeed are dangerous areas of our doxastic lives: having the wrong religious beliefs can seriously damage us psychologically and spiritually while having the wrong beliefs about nutrition and medicine can kill us. But there seem to be safe areas of our doxastic lives: whatever I believe about the last digit in the number of hairs on my head or about the generalized continuum hypothesis seems quite safe. Yet, having the unevidenced belief that the last digit in the number of hairs on my head is three is just as impermissible as having the unevidenced belief that milk cures cancer.
Perhaps it is simply that moral and epistemic normativity are not as analogous as they have seemed to some.
But there is another option. Perhaps, despite what I said, our doxastic lives are always dangerous. Here is one way to suggest this. Perhaps truth is sacred, and so dealing with truth is dangerous just as it is dangerous to be in a temple. We need reason to think that the rituals we perform are right when we are in a temple—we should not proceed by whim or by trial and error in religion—and perhaps similarly we need reasons to think that our beliefs are true, precisely because our doxastic lives always, no matter how “secular” the content, concern the sacred. Our beliefs may be practically safe, but the category of the sacred always implicates a danger, and hence a presumption of impermissibility.
I can think of two ways our doxastic lives could always concern the sacred:
God is truth.
All truth is about God: every truth is contingent or necessary; contingent truths tell us about what God did or permitted; necessary truths are all grounded in the nature of God.
All this also fits with an area of our moral lives where there is a presumption of impermissibility: assertion. One should only make assertions when one has reason to think they are true. Otherwise, one is lying or engaging in BS. Yet assertion is not always dangerous in any practical sense of “dangerous”: making unwarranted assertions about the number of hairs on one’s head or the generalized continuum hypothesis is pretty safe practically speaking. But perhaps assertion also concerns the truth, which is something sacred, and where we are dealing with the sacred, there we have spiritual danger and a presumption of impermissibility.
Tuesday, February 25, 2020
Animal consciousness
I wonder if a non-theist can be reasonably confident that non-human animals feel pain.
Start with functionalism. The precise functional system involved with feeling pain in us has no isomorph in non-human animals. For instance, in us, damage data from the senses is routed through a decision subsystem that rationally weighs moral considerations along with considerations of self-interest prior to deciding whether one should flee the stimulus, while in non-human animals there are (as far as we know) no moral considerations.
We can now have two hypotheses about what functional system is needed for pain: (a) there needs to be a weighing of damage data along with specifically moral consideration inputs, or (b) there just needs to be a weighing of damage data along with other inputs of whatsoever sort.
We cannot do any experiments to distinguish the two hypotheses. For the two hypotheses predict the same overt behavior. And even self-experimentation will be of no use. I suppose one could—at serious ethical risk—try to disable the brain’s processing of moral data, and prick oneself with a pin and check if it hurts. But while the two hypotheses do make different predictions as to what would happen in such a case, they do not make different predictions as to what one would remember after the experiment was done or how one would behave during the experiment.
Similar problems arise for every other theory of mind I know of. For in all of them, it seems we are not in a position to know precisely which range of neural structures gives rise to pain. For instance, on emergentism we know that pain emerges from our neural structures, but it seems we have no way of knowing how far we can depart from our neural structures and still get pain. On Searle-style biologism, where functionalistically irrelevant biological detail is essential for mental properties, it seems we have no way of figuring out which biological details permit mental function. And so on.
I know of only one story about how we can be reasonably confident that non-human animals feel pain: God, who knows everything, creates us with the intuition that certain behaviors mean pain, and in fact these behaviors do occur in non-human animals.
Tuesday, October 16, 2018
Yet another reason we need social epistemology
Consider forty rational people each individually keeping track of the ethnicities and virtue/vice of the people they interact with and hear about (admittedly, one wonders why a rational person would do that!). Even if there is no statistical connection—positive or negative—between being Polish and being morally vicious, random variation in samples means that we would expect two of the forty people to gain evidence that there is a statistically significant connection—positive or negative—between being Polish and being morally vicious at the p = 0.05 level. We would, further, intuitively expect that one in the forty would come to conclude on the basis of their individual data that there is a statistically significant negative connection between Polishness and vice and one that there is a statistically significant positive connection.
It seems to follow that for any particular ethnic or racial or other group, at the fairly standard p = 0.05 significance level, we would expect about one in forty rational people to have a rational racist-type view about any particular group’s virtue or vice (or any other qualities).
If this line of reasoning is correct, it seems that it is uncharitable to assume that a particular racist’s views are irrational. For there is a not insignificant chance that they are just one of the unlucky rational people who got spurious p = 0.05 level confirmation.
Of course, the prevalence of racism in the US appears to be far above the 1/40 number above. However, there is a multiplicity of groups one can be a racist about, and the 1/40 number is for any one particular group. With five groups, we would expect approximately 5/40 = 1/8 (more precisely, 1 − (39/40)^5) of rational people to get p = 0.05 confirmation of a racist-type hypothesis about one of the groups. That’s still presumably significantly below the actual prevalence of racism.
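For what it’s worth, the arithmetic behind these figures (a two-sided p = 0.05 result splits evenly between the two directions, hence the 1/40):

```python
# Quick check of the figures above: the chance of at least one spurious
# "positive connection" confirmation across five groups.
p_per_group = 0.05 / 2                       # one tail of a p = 0.05 result: 1/40
groups = 5
p_some_group = 1 - (1 - p_per_group) ** groups
print(p_per_group)                           # 0.025 = 1/40
print(p_some_group)                          # ≈ 0.119, roughly 1/8
```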
But in any case this line of reasoning is not correct. For we are not individual data gatherers. We have access to other people’s data. The widespread agreement about the falsity of racist-type claims is also evidence, evidence that would not be undercut by a mere p = 0.05 level result of one’s individual study.
So, we need social epistemology to combat racism.
Friday, August 10, 2018
Mathematical structures, physics and Bayesian epistemology
It seems that every mathematical structure (there are some technicalities as to how to define it) could metaphysically be the correct description of fundamental physical structure. This means that making Bayesianism be the whole story about epistemology—even for idealized agents—is a hopeless endeavor. For there is no hope for an epistemologically useful probability measure over the collection of all mathematical structures unless we rule out the vast majority of structures as having zero probability.
A natural law or divine command epistemology can solve this problem by requiring us to assign zero probability to some non-actual physical structures that are metaphysically possible but that our Creator wants us to be able to rule out a priori. In other words, our Creator can make us so that we only take epistemically seriously a small subset of the possibilia. This might help with the problem of scepticism, too.
Wednesday, April 11, 2018
Bayesianism and the multitude of mathematical structures
It seems that every mathematical structure (there are some technicalities as to how to define it) could in fact be the correct description of fundamental physical structure. This means that making Bayesianism be the whole story about epistemology—even for idealized agents—is a hopeless endeavor. For there is no hope for an epistemologically useful probability measure over the collection of all mathematical structures unless we rule out the vast majority of structures as having zero probability.
A natural law or divine command appendix to Bayesianism can solve this problem by requiring us to assign zero probability to some structures that are metaphysically possible but that our Creator wants us to be able to rule out a priori.
Tuesday, April 3, 2018
Divine command and natural law epistemology
I am impressed by the idea that beings of other kinds than humans can appropriately have doxastic practices different from ours, in light of:
(a) a different environment which makes different practices truth-conducive, and
(b) different proper goals for their doxastic practices (e.g., a difference of emphasis on explanation versus prediction; a difference in what subject matter is more important).
Option (a) is captured by reliabilism, but reliabilism does not by itself do much to help with (b), and suffers from an insuperable reference class problem.
I know of two epistemological theories that nicely capture the differences between epistemic practices in the light of both (a) and (b):
divine command epistemology: a doxastic practice is required just in case God commands it (variant: commands it in light of truth-based goods)
natural law epistemology: a doxastic practice is required just in case it is natural to its practitioner (variant: natural and ordered towards truth-based goods).
Both of these theories have an interesting meta-theoretic consequence: they make particularly weird thought experiments less useful in epistemology. For God’s reasons for requiring a doxastic practice may well be linked to our typical way of life, and a practice that is natural in one ecological niche may have unfortunate consequences outside that niche. (That’s sad for me, since making up weird thought experiments is something I particularly enjoy!)
(Note, however, that both of these theories have nothing to say on the question of knowledge. That’s a feature, not a bug. I think we don’t need a concept of (propositional) knowledge, just as we don’t need a concept of baldness. Anything worth saying using the language of “knowledge” or “baldness” can be more precisely said without it—one can talk of degrees of belief and justification, amount of scalp coverage, etc.—and while it’s an amusing question how exactly to analyze knowledge or baldness, it’s just that.)
Thursday, March 15, 2018
Something that has no reasonable numerical epistemic probability
I think I can give an example of something that has no reasonable (numerical) epistemic probability.
Consider Goedel’s Axiom of Constructibility. Goedel proved that if the Zermelo-Fraenkel (ZF) axioms are consistent, they are also consistent with Constructibility (C). We don’t have any strong arguments against C.
Now, either we have a reasonable epistemic probability for C or we don’t.
If we don’t, here is my example of something that has no reasonable epistemic probability: C.
If we do, then note that Goedel showed that ZF + C implies the Axiom of Choice, and hence implies the existence of non-measurable sets. Moreover, C implies that there is a well-ordering W on the universe of all sets that is explicitly definable in the language of set theory.
Now consider some physical quantity Q where we know that Q lies in some interval [x − δ, x + δ], but we have no more precise knowledge. If C is true, let U be the W-smallest non-measurable subset of [x − δ, x + δ].
Assuming that we do have a reasonable epistemic probability for C, here is my example of something that has no reasonable epistemic probability: C is false or Q is a member of U.
Tuesday, March 13, 2018
A third kind of moral argument
The most common kind of moral argument for theism is that theism better fits with there being moral truths (either moral truths in general, or some specific kind of moral truths, like that there are obligations) than alternative theories do. Often, though not always, this argument is coupled with a divine command theory.
A somewhat less common kind of argument is that theism better explains how we know moral truths. This argument is likely to be coupled with an evolutionary debunking argument to argue that if naturalism and evolution were true, our moral beliefs might be true, and might even be reliable, but wouldn’t be knowledge.
But there is a third kind of moral argument that one doesn’t meet much at all in philosophical circles—though I suspect it is not uncommon popularly—and it is that theism better explains why we have moral beliefs. The reason we don’t meet this argument much in philosophical circles is probably that there seem to be very plausible evolutionary explanations of moral beliefs in terms of kin selection and/or cultural selection. Social animals as clever as we are benefit as a group from moral beliefs to discourage secret anti-cooperative selfishness.
I want to try to rescue the third kind of moral argument in this post in two ways. First, note that moral beliefs are only one of several solutions to the problem of discouraging secret selfishness. Here are three others:
belief in karmic laws of nature on which uncooperative individuals get very undesirable reincarnatory outcomes
belief in an afterlife judgment by a deity on which uncooperative individuals get very unpleasant outcomes
a credence of around 1/2 in an afterlife judgment by a deity on which uncooperative individuals get an infinitely bad outcome (cf. Pascal’s Wager).
These three options make one think that cooperativeness is prudent, but not that it is morally required. Moreover, they are arguably more robust drivers of cooperative behavior than beliefs about moral requirement. Admittedly, though, the first two of the above might lead to moral beliefs as part of a theory about the operation of the karmic laws or the afterlife judgment.
Let’s assume that there are important moral truths. Still, P(moral beliefs | naturalism) is not going to exceed 1/2. On the other hand, P(moral beliefs | God) is going to be high, because moral truths are exactly the sort of thing we would expect God to ensure our belief in (through evolutionary means, perhaps). So, the fact of moral belief will be evidence for theism over naturalism.
The second approach to rescuing the moral argument is deeper and I think more interesting. Moreover, it generalizes beyond the moral case. This approach says that a necessary condition for moral beliefs is being able to have moral concepts. But to have moral concepts requires semantic access to moral properties. And it is difficult to explain on contemporary naturalistic grounds how we have semantic access to moral properties. Our best naturalistic theories of reference are causal, but moral properties on contemporary naturalism (as opposed to, say, the views of a Plato or an Aristotle) are causally inert. Theism, however, can nicely accommodate our semantic access to moral properties. The two main theistic approaches to morality ground morality in God or in an Aristotelian teleology. Aristotelian teleology allows us to have a causal connection to moral properties—but then Aristotelian teleology itself calls for an explanation of our teleological properties that theism is best suited to give. And approaches that ground morality in God give God direct semantic access to moral properties, which semantic access God can extend to us.
This generalizes to other kinds of normativity, such as epistemic and aesthetic: theism is better suited to providing an explanation of how we have semantic access to the properties in question.
Monday, November 6, 2017
Projection and the imago Dei
There is some pleasing initial symmetry between how a theist (or at least Jew, Christian or Muslim) can explain features of human nature by invoking the doctrine that we are in the image of God and using this explanatory schema:
1. Humans are (actually, normally or ideally) F because God is actually F.
and how an atheist can explain features attributed to God by projection:
2. The concept of God includes being actually F because humans are (actually, normally or ideally) F.
Note, however, that while schemata (1) and (2) are formally on par, schema (1) has the advantage that it has a broader explanatory scope than (2) does. Schema (1) explains a number of features (whether actual or normative) of the nature of all human beings, while schema (2) only explains a number of features of the thinking of a modest majority (the 55% who are monotheists) of human beings.
There is also another interesting asymmetry between (1) and (2). Theists can without any damage to their intellectual system embrace both (1) and a number of the instances of (2) that the atheist embraces, since given the imago Dei doctrine, projection of normative or ideal human features onto God can be expected to track truth with some probability. On the other hand, the atheist cannot embrace any instances of (1).
Note, too, that evolutionary explanations do not undercut (1), since there can be multiple correct explanations of one phenomenon. (This phenomenon is known to people working on Bayesian inference.)
Thursday, November 2, 2017
Four problems and a unified solution
A similar problem occurs in at least four different areas.
1. Physics: What explains the values of the constants in the laws of nature?
2. Ethics: What explains parameters in moral laws, such as the degree to which we should favor benefits to our parents over benefits to strangers?
3. Epistemology: What explains parameters in epistemic principles, such as the parameters in how quickly we should take our evidence to justify inductive generalizations, or how much epistemic weight we should put on simplicity?
4. Semantics: What explains where the lines are drawn for the extensions of our words?
There are some solutions that have a hope of working in some but not all the areas. For instance, a view on which there is a universe-spawning mechanism that induces random values of the constants in the laws of nature solves the physics problem, but does little for the other three.
On the other hand, vagueness solutions to 2-4 have little hope of helping in the physics case. Actually, though, vagueness doesn’t help much in 2-4, because there will still be the question of explaining why the vague regions are where they are and why they are fuzzy in the way they are—we just shift the parameter question.
In some areas, there might be some hope of having a theory on which there are no objective parameters. For instance, Bayesianism holds that the parameters are set by the priors, and subjective Bayesianism then says that there are no objective priors. Non-realist ethical theories do something similar. But such a move in the case of physics is implausible.
In each area, there might be some hope that there are simple and elegant principles that of necessity give rise to and explain the values of the parameters. But that hope has yet to be borne out in any of the four cases.
In each area, one can opt for a brute necessity. But that should be a last resort.
In each area, there are things that can be said that simply shift the question about parameters to a similar question about other parameters. For instance, objective Bayesianism shifts the question of how much epistemic weight we should put on simplicity to the question of priors.
When the questions are so similar, there is significant value in giving a uniform solution. The theist can do that. She does so by opting for these views:
Physics: God makes the universe have the fundamental laws of nature it does.
Ethics: God institutes the fundamental moral principles.
Epistemology: God institutes the fundamental epistemic principles for us.
Semantics: God institutes some fundamental level of our language.
In each of the four cases there is a question of how God does this. And in each there is a “divine command” style answer and a “natural law” style answer, and likely others.
In physics, the “divine command” style answer is occasionalism; in ethics and epistemology it just is “divine command”; and in semantics it is a view on which God is the first speaker and his meanings for fundamental linguistic structures are normative. None of these appeal very much to me, and for the same reason: they all make the relevant features extrinsic to us.
In physics, the “natural law” answer is theistic Aristotelianism: laws supervene on the natures of things, and God chooses which natures to instantiate; theistic natural law is a well-developed ethical theory, and there are analogues in epistemology and semantics, albeit not very popular ones.
Thursday, October 19, 2017
Conciliationism is false or trivial
Suppose you and I are adding up a column of expenses, but our only interest is the last digit for some reason. You and I know that we are epistemic peers. We’ve both just calculated the last digit, and Carl asks: Is the last digit a one? You and I speak up at the same time. You say: “Probably not; my credence that it’s a one is 0.27.” I say: “Very likely; my credence that it’s a one is 0.99.”
Conciliationists now seem to say that I should lower my credence and you should raise yours.
But now suppose that you determine the credence for the last digit as follows: You do the addition three times, each time knowing that you have an independent 1/10 chance of error. Then you assign your credence as the result of a Bayesian calculation with equal priors over all ten options for the last digit. And since I’m your epistemic peer, I do it the same way. Moreover, while we’re poor at adding digits, we’re really good at Bayesianism—maybe we’ve just memorized a lot of Bayes’ factor related tables. So we don’t make mistakes in Bayesian calculations, but we do at addition.
Now I can reverse engineer your answer. If you say your credence in a one is 0.27, then I know that of your three calculations, one of them must have been a one. For if none of your calculations was a one, your credence that the digit was a one would have been very low and if two of your calculations yielded a one, your credence would have been quite high. There are now two options: either you came up with three different answers, or you had a one and then two answers that were the same. In the latter case, it turns out that your credence in a one would have been fairly low, around 0.08. So it must be that your calculations yielded a one, and then two other numbers.
And you can reverse engineer my answer. The only way my credence could be as high as 0.99 is if all three of my calculations yielded a one. So now we both know that my calculations were 1, 1, 1 and yours were 1, x, y where 1, x, y are all distinct. So now you aggregate this data, and I do the same as your peer. We have six calculations yielding 1, 1, 1, 1, x, y. A Bayesian analysis, given the fact that the chance of error in each calculation is 0.1, yields a posterior probability of 0.997.
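Here is a sketch of the calculation, under the hypothetical assumption that an erroneous calculation lands uniformly on one of the nine wrong digits (the post does not fix an error model, so the exact credences come out a bit different from the figures quoted above, but the qualitative point survives: pooling all six calculations pushes the credence in a one above both of our individual credences).

```python
# A sketch of the Bayesian aggregation, under the hypothetical assumption that
# an erroneous calculation lands uniformly on one of the nine wrong digits.
# The exact numbers depend on that error model, but the qualitative point
# stands: the pooled credence in "the digit is 1" exceeds both individual ones.

def posterior_of_one(observations, p_error=0.1):
    """Posterior that the true last digit is 1, starting from equal priors over 0-9."""
    def likelihood(true_digit):
        like = 1.0
        for obs in observations:
            like *= (1 - p_error) if obs == true_digit else p_error / 9
        return like
    weights = [likelihood(d) for d in range(10)]
    return weights[1] / sum(weights)

yours = [1, 3, 7]          # one calculation gave a one, two gave other distinct digits
mine = [1, 1, 1]           # all three of my calculations gave a one
print(posterior_of_one(yours))         # ≈ 0.32
print(posterior_of_one(mine))          # ≈ 0.99998
print(posterior_of_one(yours + mine))  # ≈ 0.999996: higher than either alone
```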
So, your credence did go up. But mine went up too. Thus we can have cases where the aggregation of a high credence with a low credence results in an even higher credence.
Of course, you may say that the case is a cheat. You and I are not epistemic peers, because we don’t have the same evidence: you have the evidence of your calculations and I have the evidence of mine. But if this counts as a difference of evidence, then the standard example conciliationists give, that of different people splitting a bill in a restaurant, is also not a case of epistemic peerhood. And if the results of internal calculations count as evidence for purposes of peerhood, then there just can’t be any peers who disagree, and conciliationism is trivial.