Sunday, April 21, 2019

Does downsampling a random sequence make it less random? Is there a principle/theorem that shows this?

I'm wondering whether downsampling a random (or pseudorandom) sequence makes it less random or preserves its randomness. For example, if you take a series of pseudorandom bytes, as shown in the code below, and throw out all but the alphanumeric characters, is the resulting string of alphanumeric characters still pseudorandom? What about for a truly random case?

Is there a mathematical or computing principle or theorem that shows this one way or the other?

I looked at this question: Is a subset of a random sequence also random?

But that question does not specifically cover a selection process that uses knowledge of the values being selected. The answer by MusiGenesis suggests that such a process might reduce randomness.

// Open the /dev/urandom file to read random bytes
ifstream rand_file("/dev/urandom", ios::binary);

if (!rand_file) {
    cout << "Cannot open /dev/urandom!" << endl;
    return return_code::err_cannot_open_file;
}

string password("");
vector<char> rand_vec(rand_vec_length, 0);
while (password.length() < pwd_length) {
    fill_rand_vec(rand_vec, rand_file);

    // Iterate through the vector of pseudo-random bytes and add
    // printable, non-space chars to the password
    for (auto rand_char : rand_vec) {
        // Cast to unsigned char first: passing a negative char value
        // to isprint/isspace is undefined behavior
        unsigned char c = static_cast<unsigned char>(rand_char);
        if (isprint(c) && !isspace(c)) {
            password += rand_char;
        }

        if (password.length() >= pwd_length) {
            break;
        }
    }
}



