A Digression On Using Floating Points
01204111 Computers and Programming
Chalermsak Chatdokmaiprai
Department of Computer Engineering, Kasetsart University
Cliparts are taken from http://openclipart.org
Revised 2018/07/27
Define a function to do the task:

def isroot(x):
    return x**2 + 3*x - 10 == 0

Call the function to check if a given number is a root. In interactive mode, print() can be omitted.

>>> print(isroot(2))
True
>>> isroot(-3)
False
>>> isroot(-5)
True
>>> isroot(0)
False
Define a function to do the task:

def isroot(x):
    return x**2 - 0.9*x - 0.1 == 0

Test the function. The roots should be -0.1 and 1.

>>> isroot(2)
False
>>> isroot(-0.1)
True
>>> isroot(1)
False

Oh-oh! Why False? It should be True.
def isroot(x):
    return x**2 - 0.9*x - 0.1 == 0

>>> isroot(1)
False

Let's investigate why this became False when it should be True, by looking at the value of each term when x is 1.

>>> 1**2
1
>>> -0.9*1
-0.9
>>> 1 - 0.9
0.09999999999999998

Oh-oh! This is not 0.1 as it should be, just a close approximation.

>>> 1**2 - 0.9*1 - 0.1
-2.7755575615628914e-17

So the result is not 0 but -2.7755575615628914e-17, which is a good approximation of zero.

>>> 1**2 - 0.9*1 - 0.1 == 0
False

Now we know why this comparison yields False.
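A side note beyond the slides: the standard-library decimal module can reveal the exact value a float really stores, which makes the approximation visible in full.

```python
from decimal import Decimal

# Converting a float to Decimal shows the exact binary value it stores.
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625

# 1 - 0.9 yields a *different* approximation of 0.1, hence the nonzero result.
print(Decimal(1 - 0.9))
```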
Computers use floating-point representations of numbers to represent fractional numbers such as 0.1, -30.625, 3.1416, etc. Many fractional decimal numbers, when converted into binary, become repeating binary fractions. For example,

    Decimal: 0.1
    Binary:  0.0001 1001 1001 1001 1001 1001 1001 1001 …

Therefore it is impossible to store some numbers, e.g. 0.1, exactly in a fixed-sized floating point representation.
Let's see how the fractional decimal 0.1 is stored in computers as a floating point. Its binary equivalent is a repeating binary fraction:

    0.0001 1001 1001 1001 1001 1001 …

Converted into a normalized binary scientific notation:

    1.1001 1001 1001 1001 … * 2^-4

which in turn is converted into a 64-bit floating point:

    0 01111111011 1001100110011001100110011001100110011001100110011010

Notice that the repeating fraction has to be rounded off here to fit into the 64-bit limit, which is equivalent to this decimal number:

    0.1000000000000000055511151231257827021181583404541015625

not 0.1 but a pretty close approximation.
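A side note beyond the slides: the 64-bit pattern can be inspected directly with the standard struct module, which packs the float into its raw IEEE-754 bytes.

```python
import struct

# Pack 0.1 into its 8 raw IEEE-754 bytes (big-endian) and render them as bits.
bits = ''.join(f'{byte:08b}' for byte in struct.pack('>d', 0.1))

sign, exponent, mantissa = bits[0], bits[1:12], bits[12:]
print(sign)      # 0
print(exponent)  # 01111111011  (1019, i.e. -4 after subtracting the bias 1023)
print(mantissa)  # 52 bits of the repeating 1001 pattern, rounded at the end
```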
Let's see how the fractional decimal 5.375 is stored in computers as a floating point. Its exact binary equivalent is a non-repeating binary fraction:

    101.011

Converted into a normalized binary scientific notation:

    1.01011 * 2^2

which in turn is converted into a 64-bit floating point:

    0 10000000001 0101100000000000000000000000000000000000000000000000

Chopping trailing zeros off to fit into the 64-bit limit has no effect on precision, so the stored value is exactly 5.375 in decimal. (Thank goodness!)
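A side note beyond the slides: float.hex() shows the stored significand directly, so we can contrast the exact 5.375 with the rounded 0.1.

```python
# 5.375 = 1.01011 (binary) * 2**2; its hex significand 1.58 terminates.
print((5.375).hex())   # 0x1.5800000000000p+2

# 0.1, by contrast, needs a repeating significand that had to be rounded.
print((0.1).hex())     # 0x1.999999999999ap-4

# The round trip for 5.375 is exact.
assert float.fromhex((5.375).hex()) == 5.375
```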
Recall the example of 0.1 above: the stored 64-bit value equals 0.1000000000000000055511151231257827021181583404541015625, not 0.1 itself. The discrepancy between an actual number and its approximated, rounded value is called a rounding error.
>>> 0.1*33.33
3.3330000000000002
>>> 33.33/10
3.3329999999999997
>>> (1-0.9)*33.33
3.3329999999999993
>>> 333.3*0.1*0.1
3.3330000000000006
>>> 333.3*(1-0.9)*(1-0.9)
3.3329999999999984
>>> 3.333*(1-0.9)/0.1*(1-0.9)*10
3.332999999999999

All these expressions should have yielded the same value 3.333, but they didn't. All these rounding errors are unsurprising results of floating-point inexactness. The more calculations, the larger the rounding errors.
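A side note beyond the slides: error growth with repeated operations shows up even in a simple running sum, and the standard math.fsum function is one way to compensate for it.

```python
import math

# Ten additions of 0.1 accumulate enough rounding error to miss 1.0.
naive = sum([0.1] * 10)
print(naive)                  # 0.9999999999999999
print(naive == 1.0)           # False

# math.fsum tracks the intermediate rounding errors and recovers the result.
print(math.fsum([0.1] * 10))  # 1.0
```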
>>> 0.1*33.33
3.3330000000000002
>>> 33.33/10
3.3329999999999997
>>> (1-0.9)*33.33
3.3329999999999993
>>> 333.3*0.1*0.1
3.3330000000000006
>>> 3.33*(1-0.9)/0.1
3.3299999999999987
>>> 3.333*(1-0.9)/0.1*(1-0.9)*10
3.332999999999999

Most of the time this inexactness does not harm us (thank heaven!), because the rounding error is usually very small (at the 15th-16th decimal places in this example).
a = 0.1*33.33
b = 33.33/10
c = (1-0.9)*33.33
d = 333.3*0.1*0.1
e = 333.3*(1-0.9)*(1-0.9)
f = 3.333*(1-0.9)/0.1*(1-0.9)*10
print(f'{a:.6f}')
print(f'{b:.6f}')
print(f'{c:.6f}')
print(f'{d:.6f}')
print(f'{e:.6f}')
print(f'{f:.6f}')

Output:

3.333000
3.333000
3.333000
3.333000
3.333000
3.333000

Also, most programs only care to print the first few digits of their results, so the rounding error is rarely visible or bothersome to us.
a = 0.1*33.33
b = 33.33/10
c = (1-0.9)*33.33
d = 333.3*0.1*0.1
e = 333.3*(1-0.9)*(1-0.9)
f = 3.333*(1-0.9)/0.1*(1-0.9)*10
print(a == b, a == c, a == d, a == e, a == f)
print(b == c, b == d, b == e, b == f)
print(c == d, c == e, c == f)
print(d == e, d == f)
print(e == f)
print(a != b)
print(c != d)
print(e != f)

Output:

False False False False False
False False False False
False False False
False False
False
True
True
True

The test results are all mathematically wrong, but we're not really surprised because we know why.
>>> 0.7+0.1 == 0.8
False
>>> 0.7+0.1 == 0.6+0.2
False
>>> 0.1*3 == 0.3
False
>>> 0.1+0.1+0.1 == 0.3
False
>>> 0.1*0.1 == 0.01
False
>>> 1-0.9 == 0.1
False
>>> 0.1*6 == 0.5+0.1
False
>>> 0.2*3 == 0.6
False
>>> 0.3*2 == 10-9.4
False
>>> 1.1+2.2 == 3.3
False
>>> 0.3/0.7 == 3/7
False
>>> (1/10)*3 == 0.3
False
>>> (1/10)*3 == 1*3/10
False
>>> 3.3/10 == 3.3*0.1
False
>>> 3.3/10 == 0.33
False
>>> 6*0.1 == 0.6
False
>>> 6*(1-0.9) == 0.6
False
>>> 6*0.1 == 6*(1-0.9)
False
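A side note beyond the slides: when exact decimal arithmetic genuinely matters (money, for instance), the standard-library decimal module stores decimal digits instead of binary fractions, so these equalities hold. Note that the operands must be built from strings, not floats.

```python
from decimal import Decimal

# Built from strings, the values are exact decimals, so equality works.
print(Decimal('0.1') * 3 == Decimal('0.3'))               # True
print(Decimal('1.1') + Decimal('2.2') == Decimal('3.3'))  # True

# Built from a float, the value inherits the binary rounding error.
print(Decimal(0.1) * 3 == Decimal('0.3'))                 # False
```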
def isroot(x):
    return x**2 - 0.9*x - 0.1 == 0

>>> isroot(2)
False
>>> isroot(-0.1)
True
>>> isroot(1)
False

The roots should be -0.1 and 1. We're lucky that in the first two cases the output is correct, but isroot(1) is wrong, so the function is untrustworthy for its duty. As we have seen, this implementation tests for floating-point equality, which is dangerous.
Thou shalt sing the Mantra of Floating-Point Equality:

It's almost always more appropriate to ask whether two floating points are close enough to each other, not whether they are equal.

This leads to the following rule of thumb.
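A side note beyond the slides: Python's standard library already packages this "close enough" test as math.isclose, which supports both a relative and an absolute tolerance.

```python
import math

# The default relative tolerance is 1e-9.
print(0.1 + 0.2 == 0.3)              # False
print(math.isclose(0.1 + 0.2, 0.3))  # True

# Near zero a relative tolerance alone is useless, so supply abs_tol.
print(math.isclose(1 - 0.9 - 0.1, 0.0, abs_tol=1e-9))  # True
```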
For example, suppose the task we are solving needs precision to only six decimal places. Then any two values that differ from each other by not more than 0.000001 can be considered "equal" for our purpose, so we use the expression

    |x - y| <= 0.000001

to test whether x and y are equal. In general, instead of using the perilous x == y as the test for equality of two floats x and y, we'd better use the expression

    |x - y| <= epsilon

where epsilon is a positive number tiny enough for the task.
>>> x = 33.33/10
>>> y = (1-0.9)*33.33
>>> print(x, y)
3.3329999999999997 3.3329999999999993
>>> x == y
False
>>> abs(x-y) <= 0.000001
True

Mathematically, x and y are equal, but due to rounding errors they become minutely different, and Python is honest enough to yield False for the equality test. So we apply the mantra "close enough means equal" to make the test result more in line with mathematics.
Apply the mantra here:

def isroot(x):
    epsilon = 0.000001
    return abs(x**2 - 0.9*x - 0.1) <= epsilon

>>> isroot(2)
False
>>> isroot(-0.1)
True
>>> isroot(1)
True
>>> 0.9-1
-0.09999999999999998
>>> isroot(0.9-1)
True
>>> 5.23*10/52.3
1.0000000000000002
>>> isroot(5.23*10/52.3)
True

Now we're very pleased that our function works in perfect agreement with mathematics. Such a mathematician's delight!
❖ Write a function to check if three given numbers a, b, and c satisfy the Pythagorean equation: a^2 + b^2 = c^2

Define a function to do the task:

def isPythagorean(a, b, c):
    return a*a + b*b == c*c

Test it:

>>> isPythagorean(3, 4, 5)
True
>>> isPythagorean(5, 12, 13)
True
>>> isPythagorean(1.5, 2, 2.5)
True
>>> isPythagorean(2.5, 6, 6.5)
True
>>> isPythagorean(3, 6, 8)
False
>>> isPythagorean(1.8, 2.75, 12)
False

Ho, Ho, Ho… I'm pleased with these results.
>>> isPythagorean(0.33, 0.44, 0.55)
False
>>> isPythagorean(0.5, 1.2, 1.3)
False
>>> from math import pi
>>> isPythagorean(5*pi, 12*pi, 13*pi)
False
>>> isPythagorean(8*1.1, 15*1.1, 17*1.1)
False

Oh no! These are all wrong! But we know by now that the cause is the test for floating-point equality here:

def isPythagorean(a, b, c):
    return a*a + b*b == c*c
Apply the mantra "close enough is equal enough":

def isPythagorean(a, b, c):
    epsilon = 0.000001
    return abs((a*a + b*b) - c*c) <= epsilon

>>> isPythagorean(3, 4, 5)
True
>>> isPythagorean(5, 12, 13)
True
>>> isPythagorean(0.33, 0.44, 0.55)
True
>>> isPythagorean(0.5, 1.2, 1.3)
True
>>> from math import pi
>>> isPythagorean(5*pi, 12*pi, 13*pi)
True
>>> isPythagorean(8*1.1, 15*1.1, 17*1.1)
True

And the results are such a Pythagorean delight!
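One caveat worth noting (an addition beyond the slides): a fixed absolute epsilon breaks down when the numbers are large, because the rounding error of a*a + b*b grows with the magnitude of the operands. A relative tolerance, e.g. via math.isclose, scales with the values; the rel_tol chosen here is an illustrative assumption, not a slide value.

```python
import math

def isPythagorean(a, b, c):
    # A relative tolerance scales with the operands' magnitude.
    return math.isclose(a*a + b*b, c*c, rel_tol=1e-9)

print(isPythagorean(0.33, 0.44, 0.55))              # True
print(isPythagorean(5*math.pi, 12*math.pi, 13*math.pi))  # True
print(isPythagorean(3e8, 4e8, 5e8))                 # True, despite the huge squares
```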
>>> x = 1.1 + 1.2 + 1.3
>>> x
3.5999999999999996
>>> x >= 3.6
False
>>> x < 3.6
True
>>> x = 0.1 + 0.2 + 0.3
>>> x
0.6000000000000001
>>> x <= 0.6
False
>>> x > 0.6
True

>>> x = 1.1 + 1.2 + 1.3
>>> if x >= 3.6 :
...     print('Quality passed!')
... else:
...     print('Quality failed!')
Quality failed!
>>> x = 0.1 + 0.2 + 0.3
>>> if x <= 0.6 :
...     print('Cease fire!')
... else:
...     print('Fire the missile!')
Fire the missile!

These results are wrong because both <= and >= involve a test for equality as well.
>>> x = 1.1 + 1.2 + 1.3
>>> x
3.5999999999999996
>>> x >= 3.6
False
>>> x < 3.6
True
>>> x = 0.1 + 0.2 + 0.3
>>> x
0.6000000000000001
>>> x <= 0.6
False
>>> x > 0.6
True

>>> x = 1.1 + 1.2 + 1.3
>>> if x < 3.6 :
...     print('Quality failed!')
... else:
...     print('Quality passed!')
Quality failed!
>>> x = 0.1 + 0.2 + 0.3
>>> if x > 0.6 :
...     print('Fire the missile!')
... else:
...     print('Cease fire!')
Fire the missile!

In this example, reversing the sense of the comparisons does not help anything. So the answer is NO.
One more mantra of floating-point comparisons: for any two floating points x and y, in order to test whether x is greater than or equal to y, it's usually safer to test whether x is greater than or approximately equal to y. That is, instead of using the expression

    x >= y

it's usually safer to use

    x >= y - epsilon

where epsilon is a positive number tiny enough for the task.
Likewise, for any two floating points x and y, in order to test whether x is less than or equal to y, it's usually safer to test whether x is less than or approximately equal to y. That is, instead of using the expression

    x <= y

it's usually safer to use

    x <= y + epsilon

where epsilon is a positive number tiny enough for the task.
>>> x = 1.1 + 1.2 + 1.3
>>> x
3.5999999999999996
>>> x >= 3.6
False
>>> x >= 3.6 - 0.00001
True
>>> x = 0.1 + 0.2 + 0.3
>>> x
0.6000000000000001
>>> x <= 0.6
False
>>> x <= 0.6 + 0.00001
True

>>> x = 1.1 + 1.2 + 1.3
>>> epsilon = 0.00001
>>> if x >= 3.6 - epsilon :
...     print('Quality passed!')
... else:
...     print('Quality failed!')
Quality passed!
>>> x = 0.1 + 0.2 + 0.3
>>> epsilon = 0.00001
>>> if x <= 0.6 + epsilon :
...     print('Cease fire!')
... else:
...     print('Fire the missile!')
Cease fire!
As we have seen, the tests for "greater than" or "less than" on floating points are usually unsafe too, and "not equal to" as well. This is because x < y, x > y, and x != y are merely the logical inverses of x >= y, x <= y, and x == y, respectively, so they produce exactly the same problems.

>>> x = 1.1 + 1.2 + 1.3
>>> if x < 3.6 :
...     print('Quality failed!')
... else:
...     print('Quality passed!')
Quality failed!
>>> x = 0.1 + 0.2 + 0.3
>>> if x > 0.6 :
...     print('Fire the missile!')
... else:
...     print('Cease fire!')
Fire the missile!
>>> x = 0.1 + 0.2 + 0.3
>>> if x != 0.6 :
...     print('Go to hell!')
... else:
...     print('Go to heaven')
Go to hell!
Instead of using the expression x < y, it's usually safer to use

    x < y - epsilon

Instead of using the expression x > y, it's usually safer to use

    x > y + epsilon

Instead of using the expression x != y, it's usually safer to use

    abs(x - y) > epsilon
>>> x = 1.1 + 1.2 + 1.3
>>> epsilon = 0.00001
>>> if x < 3.6 - epsilon :
...     print('Quality failed!')
... else:
...     print('Quality passed!')
Quality passed!
>>> x = 0.1 + 0.2 + 0.3
>>> epsilon = 0.00001
>>> if x > 0.6 + epsilon :
...     print('Fire the missile!')
... else:
...     print('Cease fire!')
Cease fire!
>>> x = 0.1 + 0.2 + 0.3
>>> epsilon = 0.00001
>>> if abs(x-0.6) > epsilon :
...     print('Go to hell!')
... else:
...     print('Go to heaven')
Go to heaven
If either x or y (or both) is a floating-point value:

We want to make this comparison    We should use this safer comparison
x == y                             abs(x - y) <= epsilon
x != y                             abs(x - y) > epsilon
x <= y                             x <= y + epsilon
x >  y                             x >  y + epsilon
x >= y                             x >= y - epsilon
x <  y                             x <  y - epsilon

where epsilon is a positive number small enough for the current task.
def EQ(x, y, epsilon):    # approximately equal
    return abs(x-y) <= epsilon

def GTEQ(x, y, epsilon):  # greater than or approximately equal
    return x >= y - epsilon

def LTEQ(x, y, epsilon):  # less than or approximately equal
    return x <= y + epsilon

def NEQ(x, y, epsilon):   # not approximately equal
    return not EQ(x, y, epsilon)

def LT(x, y, epsilon):    # definitely less than
    return not GTEQ(x, y, epsilon)

def GT(x, y, epsilon):    # definitely greater than
    return not LTEQ(x, y, epsilon)
def isPythagorean(a, b, c):
    epsilon = 0.000001
    return EQ(a*a + b*b, c*c, epsilon)    # a safer substitute for a*a + b*b == c*c

epsilon = 0.00001

def solve_and_output(a, b, c):
    discriminant = b*b - 4*a*c
    if GTEQ(discriminant, 0, epsilon):    # a safer substitute for discriminant >= 0
        compute_real_roots(a, b, c)       # has real roots
    else:
        compute_complex_roots(a, b, c)    # has complex roots
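The compute_real_roots and compute_complex_roots calls above are placeholders. A self-contained sketch of the whole solver might look like this; the function name and return convention are assumptions, not from the slides.

```python
import math

EPSILON = 0.00001

def solve_quadratic(a, b, c):
    """Return the two roots of a*x**2 + b*x + c = 0 (a assumed nonzero)."""
    discriminant = b*b - 4*a*c
    if discriminant >= -EPSILON:               # safer substitute for discriminant >= 0
        d = math.sqrt(max(discriminant, 0.0))  # clamp tiny negatives caused by rounding
        return ((-b + d) / (2*a), (-b - d) / (2*a))
    else:                                      # genuinely complex roots
        d = math.sqrt(-discriminant)
        return (complex(-b, d) / (2*a), complex(-b, -d) / (2*a))

print(solve_quadratic(1, -0.9, -0.1))  # roots near 1 and -0.1
print(solve_quadratic(1, 0, 1))        # complex roots, approximately ±1j
```

The max(discriminant, 0.0) clamp matters: when rounding errors push a mathematically zero discriminant slightly negative, math.sqrt would otherwise raise an error.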
Floating-Point Arithmetic
3568/ncg_goldberg.html