Consider the following code:
import numpy as np

x = np.array([1, 5, 6, 10])              # an unstructured (nonuniform) coordinate
f = x**2                                 # function values at the points x
grad1 = np.gradient(f, x)                # df/dx
grad2 = np.gradient(f) / np.gradient(x)  # df/di * di/dx = df/dx?
I would have expected that, by the chain rule, grad1 equals grad2; the i in the comment above is simply a uniform "index" coordinate. After testing, this equality holds for linear functions, but not for e.g. x**2 as shown above. I'm now wondering whether there is a theoretical reason why the chain rule shouldn't hold in general for derivatives estimated by finite differences.
I think the problem lies in the following observation:
np.gradient does not, in general, assume the input coordinates x to be uniform, but this expression of the chain rule does: calling np.gradient(x) with no spacing argument implicitly treats the index i as a uniformly spaced coordinate, so grad2 reduces to the plain secant slope (f[i+1] - f[i-1]) / (x[i+1] - x[i-1]) at interior points. When we call np.gradient(f, x) with nonuniform x, on the other hand, each interior point is effectively computed by interpolating a quadratic through the point and its two neighbours and differentiating that interpolant at x[i], rather than by a true centered difference. The two estimates agree only when the spacing is uniform (or f is linear).
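A minimal script makes the discrepancy concrete, using the same x and f as above. The exact derivative of x**2 is 2x = [2, 10, 12, 20]; the interior-point check against a hand-computed secant slope is my own addition to illustrate what grad2 is actually computing:

```python
import numpy as np

x = np.array([1.0, 5.0, 6.0, 10.0])  # nonuniform coordinates
f = x**2                             # true derivative is 2x

# grad1: np.gradient's second-order scheme for nonuniform spacing
grad1 = np.gradient(f, x)

# grad2: the "chain rule" version; both calls assume unit spacing
grad2 = np.gradient(f) / np.gradient(x)

# At interior points, grad2 is just the secant slope through the
# two neighbours -- the derivative of a linear interpolant:
secant = (f[2:] - f[:-2]) / (x[2:] - x[:-2])
assert np.allclose(grad2[1:-1], secant)

# grad1 is exact for quadratics at interior points; grad2 is not,
# because the secant slope equals f' at the midpoint of the two
# neighbours, not at x[i], when the spacing is nonuniform.
print(grad1)  # [ 6. 10. 12. 16.]
print(grad2)  # [ 6.  7. 15. 16.]
```

The endpoints agree (both fall back to the same one-sided difference); only the interior points differ, and the gap grows with the curvature of f and the asymmetry of the spacing.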