While porting old Matlab code to NumPy, I noticed differences in a logarithm calculation.
In NumPy I use np.log; in Matlab, the built-in log function.
b = [1 1 2 3 5 1 1];
p = b ./ sum(b);   % normalize counts to probabilities
sprintf('log(%.20f) = %.20f', p(5), log(p(5)))
import numpy as np

b = np.array([1, 1, 2, 3, 5, 1, 1])
p = b.astype('float64') / np.sum(b)   # normalize counts to probabilities
print(f'log({p[4]:.20f}) = {np.log(p[4]):.20f}')
On my 2020 MacBook Pro with an M1 chip, I get a mismatch at the 16th decimal digit:
log(0.35714285714285715079) = -1.02961941718115834732 # Matlab
log(0.35714285714285715079) = -1.02961941718115812527 # NumPy
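The absolute gap is tiny; to see it in machine terms, it can be measured in ULPs (units in the last place). A minimal check, just pasting the two printed values back in as constants (math.ulp needs Python 3.9+):

import math

log_matlab = -1.02961941718115834732   # value printed by Matlab
log_numpy  = -1.02961941718115812527   # value printed by NumPy

ulp = math.ulp(log_matlab)             # size of one ULP at this magnitude
print(f'one ULP    : {ulp:.3e}')
print(f'ULPs apart : {abs(log_matlab - log_numpy) / ulp:.2f}')

Here the two results come out about one ULP apart, i.e. they are adjacent float64 values.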
I would like to get exactly the same results. Any idea how to modify my Python code?
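For reference, a high-precision value of the same log can be computed with mpmath (an extra dependency, not used in my code above), which shows which of the two float64 results is the correctly rounded one:

from mpmath import mp, log, mpf

mp.prec = 113                 # well beyond float64 precision

p4 = 5.0 / 14.0               # exactly the float64 value of p(5) / p[4] above
ref = log(mpf(p4))            # high-precision log of that exact float

print('reference :', mp.nstr(ref, 25))
print('as float64:', float(ref))   # reference rounded to the nearest double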
Logically no. But for research this is important. What if the imprecisions accumulate and the results deviate even more? Imagine a deep neural network with a log-based activation function that learns something different when implemented in Python than in Matlab.
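As a concrete (if artificial) sketch of that concern, the following compares a float64 sum of logs over a made-up random workload against a 200-bit mpmath reference; the rounding errors of the individual np.log calls and of the summation accumulate in the result:

import numpy as np
from mpmath import mp, log, mpf, fsum

mp.prec = 200                          # high-precision reference arithmetic

rng = np.random.default_rng(0)         # fixed seed, hypothetical data
xs = rng.uniform(0.1, 10.0, size=10_000)

s64 = float(np.sum(np.log(xs)))        # float64 logs, float64 summation
sref = fsum(log(mpf(float(x))) for x in xs)   # high-precision reference

print('float64 sum:', s64)
print('reference  :', mp.nstr(sref, 25))
print('abs error  :', mp.nstr(abs(mpf(s64) - sref), 5))

The absolute error is still small here, but it grows with the length of the sum; whether it matters depends on what the downstream computation does with it.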