While I should confess upfront that I’ve always had a weaker command of the details of floating point arithmetic than I feel I ought to have, this sort of thing still blows my mind when I stumble upon it. These moments invariably make me realize that floating point math will simply never satisfy my naive hopes as a mathematician:

0.1 + 0.1 == 0.2 # True
0.1 + 0.1 + 0.1 == 0.3 # False
0.1 + 0.1 + 0.1 + 0.1 == 0.4 # True

On my Intel Core 2 Duo machine running OS X, those statements have the indicated truth values in all three of Julia, R and Python.
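One way to see what is actually going on (in Python here, for concreteness, though the same holds in Julia and R) is to force more digits than the default printing shows — the stored binary values are close to, but not exactly, the decimals we typed:

```python
# The doubles nearest to 0.1, 0.2, and 0.3 are not exactly those
# decimals; printing 20 digits exposes the stored values.
print(f"{0.1 + 0.1:.20f}")        # sum of two tenths
print(f"{0.2:.20f}")              # the literal 0.2 -- same double
print(f"{0.1 + 0.1 + 0.1:.20f}")  # sum of three tenths
print(f"{0.3:.20f}")              # the literal 0.3 -- a different double
```

The first two lines print the same digits; the last two don't, which is exactly why the middle comparison above is the one that fails.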

Consider this evidence for the truth of the combined propositions, “God created the integers. All else is the work of man,” and “Out of the crooked timber of humanity no straight thing was ever made.”

Yep, binary representations of decimal numbers are weird. You probably know that Julia/R/Python give the same result because all of them leverage the IEEE floating-point logic that Intel has built into its chips since, I believe, the mid-1980s. Julia has support for Rational types (fractions), and I’m sure Python has a library for that too. Either of those would get the equality right. And most languages have floating-point “approximately equal” methods. R has all.equal() with a default tolerance that’s good enough:

> all.equal(0.1 + 0.1 + 0.1, .3)

[1] TRUE
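And as a point of comparison for the rational-number route mentioned above, Python's standard-library fractions module gets the equality exactly right, since no binary rounding happens anywhere:

```python
from fractions import Fraction

# Exact rational arithmetic: 1/10 is stored as the pair (1, 10),
# so summing three of them gives exactly 3/10.
tenth = Fraction(1, 10)
print(tenth + tenth + tenth == Fraction(3, 10))  # True
```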

In the abstract, I do know that. But when it comes up in practice, I’m always disappointed that floating point numbers don’t obey the rules I want them to obey.

I’m pretty sure I had this exact experience 10 years ago and felt equally unhappy about it.

What’s the approximate equality test in Julia? In my code I’ve been doing things like abs(a - b) < 10e-16, but it would be nice to know what the "right" way is.
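For what it's worth, a fixed absolute cutoff like that stops working at large magnitudes, where neighboring doubles are farther apart than the cutoff itself (a Python sketch; math.ulp gives the gap between a double and the next one up):

```python
import math

# Near 1e16, adjacent doubles are 2.0 apart, so no two *distinct*
# doubles in that range can ever pass an abs(a - b) < 1e-15 test.
print(math.ulp(1e16))  # spacing between representable doubles near 1e16
```

This is the usual argument for scaling the tolerance to the magnitude of the operands rather than hard-coding it.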

Oh, in Julia it’s approx_eq(a, b). Strangely, I don’t think it’s in the documentation anywhere. I learned about it while looking at the test files to work on my (still half-finished) testing framework. I’ll add it to the docs…

Thanks! It’s amazing how much functionality there is in Julia that I don’t know about simply because it’s not been documented yet.

Ack! As it turns out, there isn’t an approx_eq() in base Julia! It is defined, however, in extras/test.jl. Now that I think about it, I’m going to submit a pull request to put that function in base. I’m also going to redefine it so that the tolerance is, by default, twice the max uncertainty of the numbers being compared: 2max(eps(a), eps(b)), using the handy eps() function that _is_ defined in base. I’d previously borrowed R’s fixed 1e-6, which is probably not a great idea.
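In Python terms, the proposed tolerance rule looks something like this — a sketch of the same idea, not Julia's actual code, with math.ulp standing in for Julia's eps():

```python
import math

def approx_eq(a, b):
    # Tolerance: twice the larger unit-in-the-last-place of the two
    # operands, mirroring the 2max(eps(a), eps(b)) rule described above.
    return abs(a - b) <= 2 * max(math.ulp(a), math.ulp(b))

print(approx_eq(0.1 + 0.1 + 0.1, 0.3))  # True: within 2 ulps
print(approx_eq(0.1, 0.2))              # False: genuinely different
```

Unlike a fixed 1e-6, this tolerance scales with the magnitude of the inputs, so it behaves sensibly for both tiny and huge values.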

Sorry about the confusion!

MATLAB also handles it like the three languages mentioned (R, Julia, Python).

Mathematica (Wolfram) gets it right …

I guess Maple will also manage … though I haven’t tested it there …

Ahah, to my surprise, there’s an isclose() function in extras/nearequal.jl! As you said, who knew?

They really gotta get documentation for extras straightened out.

God said: “you shall never ever use == on floating point numbers”

This floating point behavior is slightly less shocking if your language doesn’t lie to you when printing floats:

R> 0.1 + 0.2

[1] 0.3

Matlab> 0.1 + 0.2

ans =

0.3000

julia> 0.1 + 0.2

0.30000000000000004

In R and Matlab, the result of 0.1+0.2 *looks* like exactly 0.3 even though it’s not. Julia uses the excellent double-conversion library [http://code.google.com/p/double-conversion/] to efficiently print the shortest decimal representation that will exactly reproduce a float value. This is a surprisingly hard problem that was only recently solved satisfactorily, by Florian Loitsch [http://florian.loitsch.com/publications/dtoa-pldi2010.pdf], who also wrote the double-conversion library.
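Python 3 behaves the same way, for what it's worth: its repr() produces the shortest decimal string that round-trips to the same double (since version 3.1), so it doesn't hide the discrepancy either:

```python
# Python 3, like Julia, prints the shortest decimal string that
# parses back to exactly the same double.
x = 0.1 + 0.2
s = repr(x)
print(s)              # 0.30000000000000004
assert float(s) == x  # the printed form round-trips exactly
```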

Stefan, fascinating that it was solved so recently! Have printed that article and will read it soon!