The .NET reference source shows the implementation of NextBytes() as:
for (int i = 0; i < buffer.Length; i++)
{
    buffer[i] = (byte)(InternalSample() % (Byte.MaxValue + 1));
}
InternalSample provides a value in [0, int.MaxValue), as evidenced by its doc comment and by the fact that Next(), which is documented to return that range, simply calls InternalSample.
My concern is that, since InternalSample can produce int.MaxValue different values, and that number is not evenly divisible by 256, there should be a slight bias in the resulting bytes, with some values (in this case just 255) occurring less frequently than others.
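To make the counting concrete, here is a minimal C# sketch of the arithmetic I am assuming (the class name and which rows get printed are just illustrative choices, not anything from the library):

using System;

class ModuloBiasCount
{
    static void Main()
    {
        // InternalSample is documented to return values in [0, int.MaxValue),
        // i.e. M = 2^31 - 1 inputs feeding the % 256 step (assuming those
        // inputs are uniformly distributed).
        const long M = int.MaxValue;       // 2,147,483,647

        long baseCount = M / 256;          // 8,388,607
        long remainder = M % 256;          // 255: bytes 0..254 get one extra input

        for (int b = 0; b < 256; b++)
        {
            long inputs = baseCount + (b < remainder ? 1 : 0);
            if (b <= 1 || b >= 254)        // print a few representative rows
                Console.WriteLine($"byte {b,3}: {inputs} inputs map to it");
        }
        // This prints 8,388,608 for bytes 0..254 and 8,388,607 for byte 255,
        // i.e. a relative shortfall of about 1/8,388,608 (roughly 1.2e-7)
        // for the value 255.
    }
}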
My questions are:
- Is this analysis correct, or is the method in fact unbiased?
- If the bias exists, is it strong enough to matter for any real application?
FYI, I know Random should not be used for cryptographic purposes; I'm thinking about its valid use cases (e.g. simulations).
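For a sense of scale in a simulation setting, here is a sketch of an empirical tally of NextBytes output; the seed, buffer size, and batch count are arbitrary choices for illustration, not anything prescribed by the API:

using System;

class NextBytesTally
{
    static void Main()
    {
        // Tally the byte values produced by Random.NextBytes over a large sample.
        var rng = new Random(12345);
        var buffer = new byte[1 << 20];          // 1 MiB per batch
        var counts = new long[256];

        for (int batch = 0; batch < 64; batch++) // 64 MiB of output in total
        {
            rng.NextBytes(buffer);
            foreach (byte b in buffer)
                counts[b]++;
        }

        long total = 64L * buffer.Length;
        Console.WriteLine($"expected per value: {total / 256.0:F0}");
        Console.WriteLine($"observed for 0:     {counts[0]}");
        Console.WriteLine($"observed for 255:   {counts[255]}");
        // With a relative deficit of only ~1.2e-7 on the value 255, the sampling
        // noise in a 64 MiB run (on the order of 1e-3 relative) completely
        // drowns out the bias, so a tally like this cannot be expected to show it.
    }
}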