Academics Make Theoretical Breakthrough in Random Number Generation

Two University of Texas academics have made what some experts believe is a breakthrough in random number generation that could have lasting implications for cryptography and computer security.

David Zuckerman, a computer science professor, and Eshan Chattopadhyay, a graduate student, published a paper in March that will be presented in June at the Symposium on Theory of Computing. The paper describes a method the two devised for generating high-quality random numbers. The work is theoretical, but Zuckerman said it could eventually lead to practical advances in cryptography, scientific polling, and the study of other complex environments such as the climate.

“We show that if you have two low-quality random sources—lower quality sources are much easier to come by—two sources that are independent and have no correlations between them, you can combine them in a way to produce a high-quality random number,” Zuckerman said. “People have been trying to do this for quite some time. Previous methods required the low-quality sources to be not that low, but more moderately high quality.

“We improved it dramatically,” Zuckerman said.
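
The construction in the paper is intricate, but the general shape of a two-source extractor can be sketched with a much older and weaker example. The Python snippet below implements the classic inner-product extractor of Chor and Goldreich, which combines two independent weak sources into one nearly uniform bit; it is not the method from the new paper, and it only works when both sources have min-entropy rate above one half, the very barrier the new result breaks.

```python
import os


def inner_product_extractor(x: bytes, y: bytes) -> int:
    """Return the GF(2) inner product of two equal-length byte strings.

    If x and y come from independent sources that each have min-entropy
    rate above 1/2, the output bit is close to uniformly random.
    """
    assert len(x) == len(y), "sources must be the same length"
    bit = 0
    for xb, yb in zip(x, y):
        # AND the two bytes, then fold the parity of the set bits into the result.
        bit ^= bin(xb & yb).count("1") & 1
    return bit


# Toy usage: os.urandom merely stands in for two independent weak sources.
source_a = os.urandom(16)
source_b = os.urandom(16)
print(inner_product_extractor(source_a, source_b))
```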

The technical details are described in the academics’ paper “Explicit Two-Source Extractors and Resilient Functions.” The academics’ introduction of resilient functions into their new algorithm built on numerous previous works to arrive at a landmark moment in theoretical computer science. Already, one other leading designer of randomness extractors, Xin Li, has built on their work to create sequences of many more random numbers.

“You expect to see advances in steps, usually several intermediate phases,” Zuckerman said. “We sort of made several advances at once. That’s why people are excited.”

In fact, academics worldwide have taken notice. Oded Goldreich, a professor of computer science at the Weizmann Institute of Science in Israel, called it a fantastic result.

“It would have been great to see any explicit two-source extractor for min-entropy rate below one half, let alone one that beats Bourgain’s rate of 0.499,” Goldreich said on the Weizmann website. “Handling any constant min-entropy rate would have been a feast (see A Challenge from the mid-1980s), and going beyond that would have justified a night-long party.”
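
The min-entropy rate Goldreich refers to is the standard measure of a weak source’s quality; for an n-bit source X, the usual definitions are:

```latex
H_\infty(X) = -\log_2 \max_{x} \Pr[X = x],
\qquad
\operatorname{rate}(X) = \frac{H_\infty(X)}{n}
```

A rate of 1 corresponds to a perfectly uniform n-bit source; Bourgain’s extractor required both sources to have rate just below one half, the bound the new result pushes past.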

Henry Yuen, an MIT PhD student in theoretical computer science, called the paper “pulse-quickening.”

“If the result is correct, then it really is — shall I say it — a breakthrough in theoretical computer science,” Yuen said.

Scrutiny of the random number generators used in commercial applications has intensified since the Snowden documents were published; sometimes random numbers aren’t so random. Low-quality random numbers are much easier to predict, and when they are used they undermine the security and cryptography protecting data. For now, Zuckerman and Chattopadhyay’s result is theoretical, and work remains to lower the margins of error, Zuckerman said.

Previous work on randomness extractors, including advances made by Zuckerman, required that one sequence used by the algorithm be truly random, or that both sources be close to random. The academics’ latest work clears those restrictions, allowing the use of sequences that are only weakly random. Their method requires fewer computational resources and produces higher-quality randomness. Today’s random number systems, by comparison, are fast but far more ad hoc.

“This is a problem I’ve come back to over and over again for more than 20 years,” said Zuckerman. “I’m thrilled to have solved it.”
