Sunday, June 13, 2021

Why random.randint() is much slower than random.getrandbits()

I made a test comparing random.randint() and random.getrandbits() in Python. The result shows that getrandbits is much faster than randint.

from random import randint, getrandbits, seed
from datetime import datetime

def ts():
    # Wall-clock timestamp in seconds
    return datetime.now().timestamp()

def diff(st, en):
    # Print the elapsed time in milliseconds
    print(f'{round((en - st) * 1000, 3)} ms')

seed(111)
N = 6  # number of draws: 10**N

# Time 10**N calls to randint(0, 1023)
rn = []
st = ts()
for _ in range(10**N):
    rn.append(randint(0, 1023))
en = ts()
diff(st, en)

# Time 10**N calls to getrandbits(10)
rn = []
st = ts()
for _ in range(10**N):
    rn.append(getrandbits(10))
en = ts()
diff(st, en)

The range of numbers is exactly the same, because 10 random bits span 0 (0b0000000000) to 1023 (0b1111111111).
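As a quick sanity check that both calls cover the same 0–1023 range, one can draw a large sample from each (the sample size here is an arbitrary choice) and compare the observed bounds:

```python
from random import randint, getrandbits, seed

seed(111)
a = [randint(0, 1023) for _ in range(100_000)]
b = [getrandbits(10) for _ in range(100_000)]

# With 100,000 uniform draws over 1024 values, the extremes
# are hit with overwhelming probability.
assert min(a) == min(b) == 0
assert max(a) == max(b) == 1023
```

Note that the two sequences are not element-wise identical even with the same seed, since randint consumes the generator differently than a bare getrandbits(10) call.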

The output of the code:

590.509 ms
91.01 ms

As you can see, getrandbits is, under the same conditions, almost 6.5 times faster. But why? Can somebody explain this?



