In my Python code, I am using the decimal module because I need a very high level of precision.
I have variables a, b, and c, where c = a / b.
I asked Python to print out the following:
print "a: %.50f:" % (a),type(a)
print "b: %.50f:" % (b),type(b)
print "c: %.50f:" % (c),type(c)
which produced:
a: 0.00000006480000292147666645492911155490567409742653: <class 'decimal.Decimal'>
b: 43200001.94765111058950424194335937500000000000000000000000: <class 'decimal.Decimal'>
c: 0.00000000000000149999999999999991934287350973866750: <class 'decimal.Decimal'>
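(Side note, in case it matters: as far as I can tell, %-style %f formatting accepts a Decimal by converting it with float() first, so the 50 digits above show the nearest binary float rather than the exact Decimal. A minimal sketch of that conversion:)

import decimal

d = decimal.Decimal('0.1')
# %f accepts anything float() accepts, so the Decimal is converted
# to binary floating point before it is formatted
print("%.50f" % d)   # 0.10000000000000000555111512312578270211815834045410
# str() shows the Decimal's exact stored value instead
print(str(d))        # 0.1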
This is all fine, but then, to check the value of c, I went into Python's interactive mode and copied and pasted those numbers in directly:
import decimal

a = decimal.Decimal('0.00000006480000292147666645492911155490567409742653')
b = decimal.Decimal('43200001.94765111058950424194335937500000000000000000000000')
Then when I ask for a / b, I get:
Decimal('1.500000000000000013210016734E-15')
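For reference, that quotient has 28 significant digits, which matches the decimal module's default context precision. A minimal sketch, using the same a and b values, of how the context precision bounds the result of the division:

import decimal

a = decimal.Decimal('0.00000006480000292147666645492911155490567409742653')
b = decimal.Decimal('43200001.94765111058950424194335937500000000000000000000000')

# the default context carries 28 significant digits
print(decimal.getcontext().prec)   # 28
print(a / b)                       # 1.500000000000000013210016734E-15 (28 digits)

# raising the precision keeps more digits in the quotient
decimal.getcontext().prec = 50
print(a / b)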
which is a slightly different result! I know that floating-point issues can cause small imprecisions like this, but that's why I have been using the decimal module.
Can anyone tell me where the difference in these two answers is coming from please?
The decimal module allows you to modify the precision of the numbers using the decimal.setcontext() function. Are you using this to modify the default precision? If so, what precision are you using?
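For example, a quick sketch of how to inspect or change it:

import decimal

print(decimal.getcontext().prec)   # default is 28 significant digits

# change the current thread's context in place ...
decimal.getcontext().prec = 50

# ... or install a fresh context, as mentioned above
decimal.setcontext(decimal.Context(prec=50))

# ... or change it only temporarily
with decimal.localcontext() as ctx:
    ctx.prec = 50
    # arithmetic here is done with 50 significant digits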