Friday, April 24, 2020

Why does rand() repeat numbers far more often on Linux than Mac?

I was implementing a hashmap in C as part of a project I'm working on, using random inserts to test it, when I noticed that rand() on Linux seems to repeat numbers far more often than on Mac. RAND_MAX is 2147483647/0x7FFFFFFF on both platforms. I've reduced it to this test program, which makes a byte array with one slot per possible value (RAND_MAX + 1 of them), generates RAND_MAX random numbers, notes whether each is a duplicate, and checks it off the list as seen.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

/* rand() returns values in [0, RAND_MAX], so the array needs
   RAND_MAX + 1 slots to cover every possible return value. */
char randoms[(size_t)RAND_MAX + 1];
int dups = 0;

int main(void) {
    memset(randoms, 0, sizeof randoms);
    srand(time(0)); /* seed with the current time */
    for (int i = 0; i < RAND_MAX; i++) {
        int r = rand();
        if (randoms[r]) {
            // printf("duplicate at %d\n", r);
            dups++; /* this value has been seen before */
        }
        randoms[r] = 1; /* mark this value as seen */
    }
    printf("duplicates: %d\n", dups);
    return 0;
}
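
(Note that the randoms array alone is about 2 GiB, so the test needs at least that much free memory to run.)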

Linux consistently generates around 790 million duplicates. Mac consistently generates only one, so it cycles through every random number it can generate almost without ever repeating. Can anyone explain how this works? I can't find any relevant difference in the man pages, can't tell which RNG each platform is using, and can't find anything online. Thanks!
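
For what it's worth, 790 million is suspiciously close to RAND_MAX/e. Here is a quick sanity check (assuming an ideal generator that draws uniformly and independently): after n draws from n equally likely values, the expected number of distinct values is n(1 - (1 - 1/n)^n), so the expected number of duplicates approaches n/e for large n.

#include <math.h>
#include <stdio.h>

int main(void) {
    /* Expected duplicates after n uniform, independent draws from n
       equally likely values: n - n*(1 - (1 - 1/n)^n) ~= n/e for large n. */
    double n = 2147483647.0; /* RAND_MAX on both platforms */
    double distinct = n * (1.0 - pow(1.0 - 1.0 / n, n));
    printf("expected duplicates: %.0f\n", n - distinct); /* ~790 million */
    return 0;
}

So the Linux count is about what a generator behaving like a true uniform source would produce, while the Mac count looks like its rand() steps through its output range almost like a permutation.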



