
Respect your elders

JavaScript

$ node
> .1+.2-.3
5.551115123125783e-17
> 1/(.1+.2-.3)
18014398509481984

Whoops. JavaScript thinks .1+.2-.3 = 5.551115123125783e-17. I guess that's not so bad. I mean, it's really, really, really small: 0.00000000000000005551115123125783, right?

Well, not really, because 0 is special. Otherwise, we would live in a world where 1/0 = 18014398509481984.
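
Where does that leftover 5.551115123125783e-17 come from? The literals are off before any arithmetic even happens: a binary double can't store .1, .2, or .3 exactly. Here's a quick sketch of that in Python, using the standard decimal module to show the exact value each double really holds:

from decimal import Decimal

# The exact values an IEEE 754 double stores for each literal
print(Decimal(.1))   # 0.1000000000000000055511151231257827021181583404541015625
print(Decimal(.2))   # 0.200000000000000011102230246251565404236316680908203125
print(Decimal(.3))   # 0.299999999999999988897769753748434595763683319091796875

# .1 and .2 both land slightly high and .3 lands slightly low,
# so the difference comes out as a tiny positive number instead of 0
print(.1 + .2 - .3)  # tiny, but not 0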

Python

$ python
Python 2.7.8 (default, Dec  6 2014, 13:17:43) 
[GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.56)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> .1+.2-.3
5.551115123125783e-17
>>> 1/(.1+.2-.3)
1.8014398509481984e+16

$ python3
Python 3.2.3 (default, Feb 27 2014, 21:31:18) 
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> .1+.2-.3
5.551115123125783e-17
>>> 1/(.1+.2-.3)
1.8014398509481984e+16

Python didn't do any better. It still thinks .1+.2-.3 = 5.551115123125783e-17, even though .1+.2-.3 is really 0.

And it still gets the division wrong, since you can't divide by 0. Instead it happily reports 1/5.551115123125783e-17 = 1.8014398509481984e+16.
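
To be fair, none of these languages are doing the math "wrong"; they are all doing exactly what IEEE 754 doubles do. If you actually need .1+.2-.3 to count as zero, the usual workarounds look something like this (sketched in Python; math.isclose needs 3.5+, the rest works in 2.7 as well):

import math
from fractions import Fraction

x = .1 + .2 - .3

# Compare against a tolerance instead of expecting an exact 0
print(abs(x) < 1e-9)              # True
print(math.isclose(.1 + .2, .3))  # True (math.isclose is Python 3.5+)

# Or keep the arithmetic exact and only convert to a float at the very end
print(Fraction(1, 10) + Fraction(2, 10) - Fraction(3, 10))  # 0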

Java

public class Zero{
    public static void main(String args[]){
        System.out.println((.1+.2-.3));
        System.out.println(1/(.1+.2-.3));
    }
}

output

5.551115123125783E-17                                                      
1.8014398509481984E16                                                      

Whoops. I have to be honest: when I first started writing this post, I did not expect that Java would have the same issue.

C

#include <stdio.h>

int main(){
    printf("%f\n",(.1+.2-.3));
    printf("%f\n",(1/(.1+.2-.3)));
}

output

0.000000
18014398509481984.000000

Wait, what? How is 1/0 not undefined?

C (again)

#include <stdio.h>

int main(){
    printf("%g\n",(.1+.2-.3));
    printf("%g\n",(1/(.1+.2-.3)));
}

output

5.55112e-17
1.80144e+16

:'(
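
So it was never really 1/0. The value is still that same 5.551115123125783e-17; %f just rounds the display to six decimal places (which is why the first C version printed 0.000000), while %g shows what is actually there. Dividing by a float that really is zero is a different story. A quick sketch back in Python to make the point:

x = .1 + .2 - .3
print(x == 0)   # False: we never divided by zero at all
print(1 / x)    # ~1.8e+16, same as every REPL above

# Dividing by an honest float zero behaves very differently
try:
    print(1 / 0.0)
except ZeroDivisionError as err:
    print(err)  # Python raises; C with IEEE 754 would typically give you inf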

A little background

I first found a post while browsing reddit's /r/programming sub and came across this. At first, I thought to myself, how interesting; I did a bunch of crypto stuff in Python when I was in school, and I wondered if I had rounding issues (not that it matters, but I didn't). Anyway, I came back and did a bunch of tests to find out which languages have this issue. The truth is, they all do. Even Perl, which cheats a bit with how it prints floats, doesn't come out to zero.

Without getting too "theoretical": numbers are imaginary. Even with real numbers, what is easy for a human to read and understand isn't always easy for a computer to represent.

The "real" truth is that decimal numbers are actually more error prone that floating point numbers as @Veedrac points out in his post.


December 10 2015
