I was playing with a urandom-based implementation of random.choice, and I'm wondering what people here would think about where I got to and where I'm going. First, some context.
My first approach to using urandom to make a uniformly random selection is to:
- Calculate the number of bits required to represent the indices of the collection.
- Generate that many random bits, redrawing until the value they represent is a valid index (i.e. less than the size of the collection).
This is what I do in the following code:
import math
import os

def urandSelect(chrs):
    # Bits needed to represent the indices 0 .. len(chrs) - 1 (at least 1).
    bits = max(1, math.ceil(math.log2(len(chrs))))
    nbytes = math.ceil(bits / 8)
    idx = len(chrs)
    # Rejection sampling: redraw until the value is a valid index.
    while idx >= len(chrs):
        # Mask off the low `bits` bits so idx is uniform on [0, 2**bits).
        # (Building the value from bin(b) per byte would drop leading
        # zeros and skew the distribution.)
        idx = int.from_bytes(os.urandom(nbytes), "big") & ((1 << bits) - 1)
    return chrs[idx]
Now here's the rub: worst-case runtime. Hypothetically this could generate an idx >= len(chrs) any number of times before actually returning, so while the expected runtime is reasonable, the worst-case runtime is unbounded.
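To back up the "expected runtime is reasonable" claim: since bits is the ceiling of log2(len(chrs)), we have 2**bits < 2 * len(chrs), so each draw is accepted with probability at least 1/2 and the expected number of draws is at most 2. A quick illustration (the collection size 100 is just an arbitrary example):

```python
import math

n = 100  # hypothetical collection size
bits = math.ceil(math.log2(n))
# Acceptance probability of a single draw: n out of 2**bits
# equally likely bit patterns land on a valid index.
p_accept = n / 2 ** bits
print(bits, p_accept)  # 7 bits, acceptance probability 100/128 = 0.78125
```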
So my question is this: how would you go about making a uniform random selection with a bounded worst-case runtime, using urandom as your RNG?
A side note: the idea of just generating a large number of random bytes and dividing down into the range required to index the collection has been suggested, but the non-uniformity arising from 2^nbits % len(chrs) != 0 is a deal-breaker.
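For a concrete picture of that bias: with 3 bits and a 3-element collection, the 8 equally likely bit patterns cannot split evenly across the 3 indices, so reducing modulo the collection size makes some indices more likely than others.

```python
# Count how many of the 2**nbits equally likely values
# land on each index when reduced modulo n.
nbits, n = 3, 3
counts = [0] * n
for v in range(2 ** nbits):
    counts[v % n] += 1
print(counts)  # [3, 3, 2] -> index 2 is strictly less likely
```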