The Shape of Floating Point Random Numbers

[Updated 10/18/2012: Fixed a typo in which mantissa was replaced with exponent.]

Over the weekend, Viral Shah updated Julia’s implementation of randn() to give a 20% speed boost. Because we all wanted to test that this speed-up had not come at the expense of the validity of Julia’s RNG system, I spent some time this weekend trying to get tests up and running. I didn’t get far, but thankfully others chimed in and got things done.

Testing an RNG is serious business. In total, we’ve considered using four different test suites:

All of these suites can be easily used to test uniform random numbers over unsigned integers. Some are also appropriate for testing uniform random numbers over floating-point values.

But we wanted to test a Gaussian RNG. To do that, we followed Thomas et al.’s lead and mapped the Gaussian RNG’s output through a high-precision quantile function to produce uniform random floating point values. As our high-precision quantile function we ended up using the one described in Marsaglia’s 2004 JSS paper.
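To make the mapping concrete: if z comes from a valid standard normal generator, then Φ(z) should be uniform on [0, 1] (the probability integral transform), and the resulting stream can be handed to any uniform test suite. Below is a minimal Julia sketch of the idea; it uses an ordinary erfc-based normal CDF (via the SpecialFunctions package) as a stand-in for Marsaglia's high-precision routine, which is what you would actually want when probing the tails.

    # Minimal sketch: turn normal draws into (approximately) uniform draws.
    # The erfc-based CDF below is a stand-in for a high-precision routine.
    using SpecialFunctions  # provides erfc

    normcdf(x) = erfc(-x / sqrt(2)) / 2  # standard normal CDF

    z = randn(10^6)      # draws from the generator under test
    u = normcdf.(z)      # should be ~ Uniform(0, 1) if randn() is sound

    println(extrema(u))            # expect values near 0 and 1
    println(sum(u) / length(u))    # expect a mean near 0.5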

With that in place, I started to try modifying my previous RNG testing code. When we previously tried to test Julia’s rand() function, I got STS working on my machine and deciphered its manual well enough to run a suite of tests on a bit stream from Julia.

Unfortunately I made a fairly serious error in how I attempted to test Julia’s RNG. Because STS expects a stream of random 0’s and 1’s, I converted random numbers into 0’s and 1’s by testing whether each floating point number was greater than or less than 0.5 (see the sketch after the list below). While this test is not completely wrong, it is very, very weak. Its substantive value comes from two points:

  1. It confirms that the median of the RNG is correctly positioned at 0.5.
  2. It confirms that the placement of successive entries relative to 0.5 is effectively random. In short, there is no trivial correlation between successive values.
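For reference, the conversion I used amounted to nothing more than this sketch (Julia, one bit per draw):

    # One bit per draw: record whether each uniform value sits above 0.5.
    u = rand(10^6)             # uniform draws on [0, 1)
    bits = UInt8.(u .> 0.5)    # 1 above the median, 0 below

    # This checks the median and the ordering of values around it,
    # but throws away almost all of the information in each draw.
    println(sum(bits) / length(bits))   # expect roughly 0.5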

Unfortunately that’s about all you learn from this method. We needed something more. So I started exploring how to convert a floating point number into bits. Others had the good sense to avoid this and pushed us forward by using the TestU01 suite.

I instead got lost exploring the surprising complexity of trying to work with the individual bits of random floating point numbers. The topic is so subtle because the distribution of bits in a randomly generated floating point number is extremely far from a random source of individual bits.

For example, a uniform variable’s representation in floating point has all the following non-random properties (a decomposition sketch in code follows the list):

  1. The sign bit is never random because uniform variables are never negative.
  2. The exponent is not random either because uniform variables are strictly contained in the interval [0, 1].
  3. Even the mantissa isn’t random. Because floating point numbers aren’t evenly spaced in the reals, the mantissa has to have complex patterns in it to simulate the equal-spacing of uniform numbers.
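Here is a small sketch of how those three fields can be pulled out of a Float64 with reinterpret (1 sign bit, 11 exponent bits, 52 mantissa bits in IEEE 754 double precision):

    # Split a Float64 into its sign, exponent and mantissa fields.
    function float_fields(x::Float64)
        bits = reinterpret(UInt64, x)
        sign     = bits >> 63
        exponent = (bits >> 52) & 0x7ff
        mantissa = bits & 0x000fffffffffffff
        return sign, exponent, mantissa
    end

    # For uniform draws the sign is always 0 and the exponent is confined
    # to a narrow range: 1022 for values in [0.5, 1), 1021 for [0.25, 0.5), ...
    for x in rand(5)
        println(bitstring(x), " => ", float_fields(x))
    end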

Inspired by all of this, I decided to get a sense for the bit pattern signature of different RNGs. Below I’ve plotted the patterns for uniform, normal, gamma and Cauchy variables using lines that describe the mean value of the i-th bit in the bit string. At a minimum, a completely random bit stream would have a flat horizontal line through 0.5, which many of the lines touch for a moment, but never perfectly match.


[Figure: mean bit value signatures for uniform, normal, gamma and Cauchy variables]
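The computation behind these signatures is simple to sketch. Here is a rough Julia version for the two generators that ship with Base (the gamma and Cauchy cases need an extra sampler); this is a reconstruction, not the code that produced the figure above.

    # Mean value of each of the 64 bit positions across many draws
    # (bit 1 = sign bit, bit 64 = last mantissa bit).
    function bit_signature(draws::Vector{Float64})
        means = zeros(64)
        for x in draws
            bits = reinterpret(UInt64, x)
            for i in 1:64
                means[i] += (bits >> (64 - i)) & 0x1
            end
        end
        return means ./ length(draws)
    end

    n = 10^5
    uniform_sig = bit_signature(rand(n))
    normal_sig  = bit_signature(randn(n))

    # A perfectly random bit stream would give 0.5 at every position;
    # the uniform sign bit, for example, comes out exactly 0.
    println(uniform_sig[1], " ", normal_sig[1])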

Some patterns:

  1. The first bit (shown on the far left) is the sign bit: you can clearly see which distributions are symmetric (a mean value of 0.5) and which are strictly positive (a mean value of 0.0).
  2. The next eleven bits are the exponent, and you can clearly see which distributions are largely concentrated in the interval [-1, 1] and which have substantial density outside of that region. These bits would clue you in to the variance of the distribution.
  3. You can see that there is a lot of non-randomness in the last few bits of the mantissa for uniform variables. There’s also non-randomness in the first few bits for all variables. I don’t yet have any real intuition for those patterns.

You can go beyond looking at the signatures of mean bit patterns by looking at covariance matrices as well. Below I show these covariance matrices in a white-blue coloring scheme in which white indicates negative values, light blue indicates zero and dark blue indicates positive values. Note that the matrices, generated using R’s image() function, are reflections of the more intuitive matrix ordering in which the [1,1] entry of the matrix occurs in the top-left instead of the bottom-left.
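The covariance computation is a short extension of the signature sketch above: build an n-by-64 matrix of bits and take the covariance of its columns. A rough Julia version (again a reconstruction, not the code behind the images below):

    using Statistics  # for cov

    # n x 64 matrix whose rows are the bit patterns of individual draws.
    function bit_matrix(draws::Vector{Float64})
        B = Array{Float64}(undef, length(draws), 64)
        for (row, x) in enumerate(draws)
            bits = reinterpret(UInt64, x)
            for i in 1:64
                B[row, i] = (bits >> (64 - i)) & 0x1
            end
        end
        return B
    end

    C = cov(bit_matrix(rand(10^5)))   # 64 x 64 covariance of the bit columns
    # Constant bits (the uniform sign bit, most exponent bits) give zero
    # rows and columns; the structure shows up in the remaining entries.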

Uniform Variables

[Figure: bit covariance matrix for uniform variables]

Normal Variables

[Figure: bit covariance matrix for normal variables]

Gamma Variables

[Figure: bit covariance matrix for gamma variables]

Cauchy Variables

[Figure: bit covariance matrix for Cauchy variables]

I find these pictures really helpful for reminding me how strangely floating point numbers behave. The complexity of these images is so far removed from the simplicity of the bit non-patterns in randomly generated unsigned integers, which can be generated by concatenating IID random bits.
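For comparison, here is a sketch of that concatenation; every bit position of the resulting unsigned integer has mean 0.5 and no structure beyond what the bit source itself has:

    # Build a UInt64 by concatenating 64 independent fair coin flips.
    function random_uint64_from_bits()
        x = UInt64(0)
        for _ in 1:64
            x = (x << 1) | rand(Bool)   # append one random bit
        end
        return x
    end

    println(bitstring(random_uint64_from_bits()))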

6 responses to “The Shape of Floating Point Random Numbers”

  1. Ed Aymami

    Most of this is so far over my head you might as well be sky-diving from space.

    I do have 2 questions though.

    What is the point?
    Also, what is the conclusion?

  2. james maida

    Arcanity came to mind when stepping back as I read this article, but in reality, the subject is important and curiously mysterious. Randomness is a bit mysterious. The material is a bit complicated, but I think I got the gist of it. Consider the simplicity of randomness in unsigned integers. They are just a string of bits or digits, all more or less the same in their role of defining a number. The order does have some meaning, but taken apart they all look the same. However, floating point numbers, i.e. real numbers, are by their description harder to pin down when playing the random game. First, they are not simple numbers in their representation. The string of digits in an unsigned integer are like random letters, but floating point numbers have groupings like words. There is a negative part, a positive part, a mantissa, an exponent, a fractional part, etc. You can randomize the content of the pieces, but the words are still there. Order is still there. I think that is why testing randomness is harder.

  3. Rob Tillaart

    “You can see that there is a lot of non-randomness in the last few bits of the exponent for uniform variables. There’s also non-randomness in the first few bits for all variables. I don’t yet have any real intuition for those patterns.”

    As far as I understand floating point, the reason the first bits are non-random has to do with how IEEE 754 encodes floating point numbers.
    The idea behind the coding scheme is that the mantissa is used as efficiently as possible to get the maximum number of significant bits in it. This implies “removing” leading zeros and adjusting the exponent accordingly, which unbalances the bit distribution.
    Furthermore, after removing the leading zeros, the first bit of the mantissa will always be 1, so it can be removed as well. By adding this encoding information to the coding scheme, one loses the ‘natural’ randomness of the numbers.
    [In fact the coding scheme is more complex; see Wikipedia for details.]

  4. John Myles White

    Good catch, Rob. That section is actually a typo. It should say that there is non-randomness in the first and last few bits of the mantissa. Updating now.

  5. Joshua Coppersmith

    Random numbers are less mysterious to me when I remind myself that they are impossible to imagine. All we really can imagine (at least, us math mortals) are random selectors, not numbers. Consider the simple task: Give me a random integer. Well, given that the integers are unbounded either way, how could you complete that task? If you put ANY bounds on the integers you are willing to return, you basically create a box function on your distribution pattern. It’s not a computable task, and perhaps not even a human one at all.

    What you CAN do is select randomly from a pre-defined subset of the integers, or at least try to, and then test the results, as you know. Randomness is a little more like a group operation from this perspective. You could imagine, for example, that all of your subset’s integers are on a clock face and that you spin a hand on the face so that it moves an unbounded random integer number of hours. Although you can’t really imagine the size of the random spin, you do know that it has a modulus with respect to your subset, so you know it will end up pointing to some number in your subset.

    For rational numbers, which are the best ‘reals’ we can do on conventional digital computers, one ought to be able to multiply the entire problem through until everything is an integer, select the random numbers as in the previous paragraph, then divide back again. The problem is that we have loss of precision all over the place from, e.g., 10^-308 to 10^308, including complete underflow near 0. So it isn’t surprising that these lossy places are under-represented and our randomness is sacrificed. Whatever the internal representation of our numbers might be, unless it is lossless, we’ll experience this problem.

    Now consider the real ‘reals’. Because the reals are dense, even on the interval [0,1] you have a bigger problem. Your spinning hand could spend eternity and just be getting ‘deeper in between’ numbers, effectively never moving at all in the conventional sense, a la Zeno’s Paradox. Waaaayy beyond my minor in math, but I’d guess one would have to start talking about kinds of numbers and show that a certain kind of number that might have the cardinality of the reals could have a selector that would somehow result in numbers meaningfully fit to be called random. If you think about it, usually statistics deals in rationals, and even if more mathematically pure, it deals with examining presumed random events, not creating them. Wild stuff.