0.1 + 0.2 is not 0.3 in Python. Here Is Why!
Dr. Ashish Bamania
I simplify the latest advances in AI, Quantum Computing & Software Engineering for you | Tech Writer With 1M+ views | Software Engineer
Let’s start with a simple comparison in Python.
0.1 + 0.2 == 0.3
What do you think this will result in?
You don’t have to be a mathematical prodigy to tell that the result should be True.
But is it?
Run it, and Python returns False.
WTH! But why?
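You can confirm this in any Python interpreter:

```python
# Comparing 0.1 + 0.2 with 0.3 using exact equality
result = 0.1 + 0.2 == 0.3
print(result)  # False
```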
Let’s see what the result of this simple calculation in Python is.
0.1 + 0.2
Can you guess the result of this?
It is obviously 0.3.
But is it?
Python actually returns 0.30000000000000004.
Where the hell did this error of 0.00000000000000004 stem from?
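You can see the stray digits for yourself:

```python
# The sum carries a tiny error in the 17th decimal place
print(0.1 + 0.2)  # 0.30000000000000004
```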
This article is about figuring this out, so that you do not make mistakes like these, which can snowball into massive bugs that take days to track down.
The Reason Lies In Representation
Floating point numbers are represented using the IEEE Standard 754 on computers.
This format breaks down a floating-point number into three parts:

1. Sign bit: A value of 0 means the number is positive, and a value of 1 means the number is negative.

2. Exponent: This represents the exponent value to which the base 2 is raised. For single-precision numbers, a bias value of 127 is added to the actual exponent to ensure it is always stored as a positive number. The bias value for double-precision floats is 1023.

3. Fraction (Mantissa): This represents the significant digits of the number, without the leading 1.
Python’s default float type uses double-precision values (64-bit).
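You can confirm this from Python itself via sys.float_info (53 significand bits means 52 stored fraction bits plus the implicit leading 1):

```python
import sys

# Python floats are IEEE 754 double-precision (binary64)
print(sys.float_info.mant_dig)  # 53 significand bits (52 stored + implicit 1)
print(sys.float_info.max_exp)   # 1024
```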
Float To Binary Conversion Using IEEE Standard 754
Let’s convert the floating-point number 6.5 to its double-precision binary representation that Python follows, based on the IEEE 754 standard.
Step 1: Determine the Sign Bit
6.5 is a positive number, so the sign bit will be 0.
Step 2: Find the Binary Representation of the Integer Part
6 in binary is 110.
Step 3: Find The Binary Representation of the Fractional Part
0.5 in binary is 0.1 (a single 1 right after the binary point).
Combining the two parts, we get the binary representation of 6.5: 110.1.
Step 4: Normalize the Number
To normalize, we express 110.1 in scientific notation: 1.101 × 2².
Here, our exponent value is 2.
Step 5: Calculate the Biased Exponent
For double-precision, the bias is 1023. This is added to the actual exponent.
2+1023 = 1025
1025 in binary is 10000000001.
Step 6: Determine the Fraction (Mantissa)
After normalization, the number we have is 1.101.
We drop the leading 1 and use the fractional part, 101.
As the fraction section for double-precision has 52 bits, we add trailing zeros to 101.
Thus, the final binary representation of the fraction/mantissa is: 1010000000000000000000000000000000000000000000000000
Step 7: Combine Everything
Next, we combine all parts as derived above:
Finally, the double-precision binary representation for 6.5 is:
0 10000000001 1010000000000000000000000000000000000000000000000000
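We can verify this whole derivation from Python itself using the standard struct module (a quick sketch; the helper name float_to_bits is my own):

```python
import struct

def float_to_bits(x: float) -> str:
    """Return the 64-bit IEEE 754 pattern of x as a binary string."""
    # '>d' packs x as a big-endian double; '>Q' reinterprets the
    # same 8 bytes as an unsigned 64-bit integer.
    (bits,) = struct.unpack('>Q', struct.pack('>d', x))
    return f'{bits:064b}'

b = float_to_bits(6.5)
print(b[0])     # sign:     0
print(b[1:12])  # exponent: 10000000001
print(b[12:])   # fraction: 1010000000000000000000000000000000000000000000000000
```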
Unlike 6.5, some decimal fractions produce infinitely repeating patterns when converted to binary.
This is true for 0.1 and 0.2.
This is how:
0.1 in binary is 0.000110011001100110011... (the pattern 0011 repeats forever)
0.2 in binary is 0.00110011001100110011... (the same repeating pattern, shifted by one bit)
Such numbers therefore have to be approximated (rounded to fit the available bits) to be represented in binary.
Let’s convert these binary float representations to view their decimal equivalents.
from decimal import Decimal
print(Decimal(0.1))
# Decimal('0.1000000000000000055511151231257827021181583404541015625')
print(Decimal(0.2))
# Decimal('0.200000000000000011102230246251565404236316680908203125')
When we add the binary approximations of 0.1 and 0.2, both of these tiny errors accumulate, and the sum is slightly larger than the closest binary approximation of 0.3.
And when this sum is rounded and converted back to the decimal system, it results in 0.30000000000000004.
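The same Decimal trick makes the mismatch explicit: the double produced by 0.1 + 0.2 and the double closest to 0.3 are two different numbers.

```python
from decimal import Decimal

# Exact value of the double produced by 0.1 + 0.2
print(Decimal(0.1 + 0.2))
# 0.3000000000000000444089209850062616169452667236328125

# Exact value of the double closest to 0.3 -- a different number
print(Decimal(0.3))
# 0.299999999999999988897769753748434595763683319091796875
```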
Check out some more examples of these floating point arithmetic errors.
Example 1
1.001 - 1.0
# Output: 0.0009999999999998899 (as opposed to 0.001)
Example 2
total = 0.0
for _ in range(10):
    total += 0.1
print(total)
# Output: 0.9999999999999999 (as opposed to 1.0)
Example 3
(0.2 ** 0.5)**2
# Output: 0.19999999999999998 (as opposed to 0.2)
How To Fix These Errors?
Little numerical errors like these can lead to massive faults in areas such as space & aviation, scientific research, and finance, to mention a few.
Here are some Python modules & libraries that can help you minimize these errors.
1. ‘Decimal’ module

from decimal import Decimal
a = Decimal('0.1')
b = Decimal('0.2')
print(a + b) # This will output a precise 0.3

Note that the values are passed as strings. Decimal(0.1) would inherit the binary error we saw earlier, so always construct Decimals from strings (or integers).
2. ‘Fractions’ module
from fractions import Fraction
a = Fraction(1, 10) # Represents 0.1
b = Fraction(2, 10) # Represents 0.2
print(a + b) # This will print 3/10, i.e. Fraction(3, 10), which is exactly 0.3
3. NumPy library
NumPy provides the isclose function, which can be used to check if two arrays are element-wise equal (within a tolerance limit).
This function is more reliable than using exact equality checks.
import numpy as np
np.isclose(0.1 + 0.2, 0.3)
# Output: True
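If you don’t need NumPy, the standard library offers the same idea for scalars via math.isclose (available since Python 3.5):

```python
import math

# Compare within a relative tolerance instead of using exact equality
print(math.isclose(0.1 + 0.2, 0.3))  # True
```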