

I got some interesting comments about my previous notes on random floating point numbers: on Lobsters, on Dreamwidth, and from Pete Cawley on Twitter.

Here’s an addendum about an alternative model of uniformity.

There are nearly 2^62 double precision floats between 0.0 and 1.0, but as I described before under “the problem”, they are not distributed uniformly: the smaller ones are much denser. (Each binade contains 2^52 doubles, so there are as many between 0.5 and 1.0 as there are between 2^-20 and 2^-19.) Because of this, there are two ways to model a uniform distribution using floating point numbers.

Both algorithms in my previous note use a discrete model: the functions return one of 2^52 or 2^53 evenly spaced numbers.
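To show the general shape of the discrete model (not either of the specific functions from the previous note), here is a minimal sketch. It assumes a hypothetical source of uniform 64-bit integers called random64(), and uses the top 53 bits scaled down by 2^-53:

    #include <stdint.h>

    uint64_t random64(void);    /* assumed: a uniform random 64-bit source */

    /* Discrete model: one of 2^53 evenly spaced doubles in [0, 1).
     * The top 53 bits of the random word form an integer that is
     * exactly representable as a double; 0x1.0p-53 is 2^-53. */
    double random_double_discrete(void) {
        return (double)(random64() >> 11) * 0x1.0p-53;
    }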

You can also use a continuous model, where you imagine a uniformly random real number with unbounded precision, and return the closest floating point result. This can have better behaviour if you go on to transform the result to model different distributions (normal, Poisson, exponential, etc.)

Taylor Campbell explains how to generate uniform random double-precision floating point numbers with source code. Allen Downey has an older description of generating pseudo-random floating-point values.
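To give a flavour of the continuous model, here is a sketch along the lines of Campbell’s random_real(): draw the exponent geometrically by skipping all-zero blocks of random bits, then fill in the significand and round. It assumes the same hypothetical random64() source and a GCC/Clang-style __builtin_clzll(); see Campbell’s code for the real thing.

    #include <math.h>
    #include <stdint.h>

    uint64_t random64(void);    /* assumed: a uniform random 64-bit source */

    /* Continuous model: round an unbounded-precision uniform real in
     * [0, 1) to the nearest double. Sketch only, not Campbell's exact code. */
    double random_real_sketch(void) {
        int exponent = -64;
        uint64_t significand;

        /* Each all-zero block means the real number is smaller than
         * 2^exponent; the loop repeats with probability 2^-64. */
        while ((significand = random64()) == 0) {
            exponent -= 64;
            if (exponent < -1074)       /* below the smallest subnormal */
                return 0.0;
        }

        /* Normalise so the leading 1 is in the top bit, topping up the
         * vacated low bits with fresh random bits. */
        int shift = __builtin_clzll(significand);
        if (shift != 0) {
            exponent -= shift;
            significand <<= shift;
            significand |= random64() >> (64 - shift);
        }

        /* Set the sticky bit so apparent ties round as if there were
         * more 1 bits further down the bit stream. */
        significand |= 1;

        return ldexp((double)significand, exponent);
    }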

In practice, the probability of entering the arbitrary-precision loop in Campbell’s code is vanishingly tiny, so with some small adjustments it can be omitted entirely. Marc Reynolds explains how to generate higher density uniform floats this way, and Pete Cawley has terse implementations that use one or two random integers per double. (Reynolds also has a note about adjusting the range and fenceposts of discrete random floating point numbers.)
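For illustration, here is a sketch of the loop-free, two-integer shape of this idea (the general technique only, not Cawley’s or Reynolds’s exact code): the leading-zero count of one word supplies the geometric exponent, and a second word supplies the 52 explicit significand bits. The one-in-2^64 case where the first word is all zeros is simply clamped instead of looping for more bits.

    #include <math.h>
    #include <stdint.h>

    uint64_t random64(void);    /* assumed: a uniform random 64-bit source */

    /* Higher density uniform double in (0, 1): the leading-zero count of
     * a uniform 64-bit word is geometrically distributed, matching how
     * often a uniform real lands in [2^-(k+1), 2^-k). Sketch only. */
    double random_double_dense(void) {
        uint64_t e_bits = random64();
        uint64_t m_bits = random64();

        /* Geometric exponent; clamp the 2^-64 all-zero case rather than
         * looping for more bits as Campbell's code does. */
        int k = (e_bits == 0) ? 64 : __builtin_clzll(e_bits);

        /* 53-bit significand with the implicit leading 1 set, so the
         * result lies in [2^-(k+1), 2^-k), uniform over the 2^52
         * evenly spaced doubles in that binade. */
        uint64_t sig = (m_bits >> 12) | ((uint64_t)1 << 52);

        return ldexp((double)sig, -53 - k);
    }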