Consider a possible world w1, which is just like the actual world except in one respect: in w1, in exactly a minute, I jump up with all my strength. Then consider a possible world w2, which is just like w1 except that, moments after I leave the ground, a quantum fluctuation causes 99% of the earth’s mass to quantum tunnel far away. As a result, my jump takes me 100 feet in the air. (Then I start floating down, and eventually I die of lack of oxygen as the earth’s atmosphere seeps away.)
Here is something I do in w2: I jump 100 feet in the air.
Now, from my actually doing something it follows that I was able to do it. Thus, in w2, I have the ability to jump 100 feet in the air.
When do I have this ability? Presumably at the moment at which I am pushing myself off from the ground. For that is when I am acting. Once I leave the ground, the rest of the jump is up to air friction and gravity. So my ability to jump 100 feet in the air is something I have in w2 prior to the catastrophic quantum fluctuation.
But w1 is just like w2 prior to that fluctuation. So, in w1 I have the ability to jump 100 feet in the air. But whatever ability to jump I have in w1 at the moment of jumping is one that I already had before I decided to jump. And before the decision to jump, world w1 is just like the actual world. So in the actual world, I have the ability to jump 100 feet in the air.
Of course, my success in jumping 100 feet depends on quantum events turning out a certain way. But so does my success in jumping one foot in the air, and I would surely say that I have the ability to jump one foot. The only principled difference is that in the one-foot case the quantum events are very likely to turn out to be cooperative.
The conclusion is paradoxical. What are we to make of it? I think the answer is this. In ordinary language, if something is really unlikely, we say it’s impossible. Thus, we say that it’s impossible for me to beat Kasparov at chess. Strictly speaking, however, it’s quite possible, just very unlikely: there is enough randomness in my very poor chess play that I could, in principle, make the kinds of moves Deep Blue made when it beat him. Similarly, when my ability to do something has extremely low reliability, we simply say that I do not have the ability.
One might think that the question of whether one is able to do something is really important for questions of moral responsibility. But if I am right in the above, then it’s not. Imagine that I could avert some tragedy only by jumping 100 feet in the air. I am no more responsible for failing to avert that tragedy than if the only way to avert it were to square the circle. Yet I can jump 100 feet in the air, while no one can square the circle.
Thus it seems that what matters for moral responsibility is not so much the answer to the question of whether one can do something, but rather the answers to questions like:
How reliably can one do it?
How reliably does one think (or justifiably think or know) one can do it?
What would be the cost of doing it?