I'm currently using xorshift128+ in my project; I know it passes BigCrush and is considered to produce rather high-quality random numbers for its speed. However, it produces 64-bit numbers, and the vast majority of the random numbers I need are small integers (between 0 and, say, 100 or so).
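For reference, the generator step looks roughly like this (essentially the public-domain reference code from Vigna's site; the function name is mine and seeding is omitted):

    #include <stdint.h>

    /* xorshift128+ state; must be seeded to something non-zero */
    static uint64_t s[2];

    uint64_t xorshift128plus(void) {
        uint64_t s1 = s[0];
        const uint64_t s0 = s[1];
        const uint64_t result = s0 + s1;
        s[0] = s0;
        s1 ^= s1 << 23;                          /* a */
        s[1] = s1 ^ s0 ^ (s1 >> 18) ^ (s0 >> 5); /* b, c */
        return result;
    }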
Now, using % to reduce the 64-bit number to the desired range gives a biased distribution (some values come up more often than others), and when the range is a power of 2 it simply throws away all but the low bits. Rejection, i.e. generating numbers until one is in range, avoids the bias, but it's somewhat problematic with small ranges, and it feels silly to generate more bits when each draw already gives me far more than I need.
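For concreteness, this is the kind of full-width rejection I mean (a sketch of one standard way to do it; `bounded` is a made-up name and it uses `xorshift128plus()` from above):

    #include <stdint.h>

    uint64_t xorshift128plus(void);  /* generator from above */

    /* value in [0, bound): reject the incomplete top interval so every
       residue class is equally likely; unbiased, but each small number
       still consumes at least one whole 64-bit draw                    */
    uint64_t bounded(uint64_t bound) {
        uint64_t limit = UINT64_MAX - UINT64_MAX % bound;
        uint64_t x;
        do {
            x = xorshift128plus();
        } while (x >= limit);
        return x % bound;
    }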
Hence, I implemented a scheme that takes only the minimum number of bits necessary: I look for the closest power of 2, e.g. for a range of 0-95 I take 7 bits (2^7 = 128), and keep drawing 7-bit values until one falls in range. The acceptance probability is always above 50%, since otherwise I could simply use one bit fewer.
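A minimal sketch of the idea (not my actual code; `take_bits`, `small_uniform`, and the handling of leftover bits are simplified):

    #include <stdint.h>

    uint64_t xorshift128plus(void);  /* generator from above */

    static uint64_t bitbuf;          /* leftover bits from the last draw */
    static int bits_left = 0;

    static uint64_t take_bits(int k) {
        if (bits_left < k) {         /* refill; for simplicity, drop the remainder */
            bitbuf = xorshift128plus();
            bits_left = 64;
        }
        uint64_t v = bitbuf & ((1ULL << k) - 1);
        bitbuf >>= k;
        bits_left -= k;
        return v;
    }

    /* uniform value in [0, bound), assuming 1 <= bound <= 2^63:
       draw the smallest k with 2^k >= bound, retry while the k-bit
       value is out of range (acceptance probability is above 1/2)  */
    uint64_t small_uniform(uint64_t bound) {
        int k = 1;
        while ((1ULL << k) < bound)
            k++;
        uint64_t v;
        do {
            v = take_bits(k);
        } while (v >= bound);
        return v;
    }

So e.g. small_uniform(96) pulls 7 bits at a time and retries on the values 96-127.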
Anyway, the system is in place, rudimentary statistical tests suggest it's working as expected, and it runs blazing fast. However, I haven't been able to run TestU01 on the modified system (there doesn't seem to be built-in support for dynamic bit sizes), and the original papers have been a bit too dense for me to get through.
Basically, I'm wondering whether passing BigCrush both forwards and backwards (i.e. on the bit-reversed output), as xorshift128+ is purported to do, strongly suggests that every individual bit is satisfactorily random and that using them separately should be fine, or whether I could be setting myself up for trouble. Optionally, I'd also appreciate pointers to any test suites that would let me empirically verify the statistical quality of the modified generator.