Wednesday, July 26, 2017

Negligible difference in performance between RDSEED and RDRAND

Recent Intel chips have instructions for generating (pseudo)random bits: RDRAND, introduced with Ivy Bridge, and RDSEED, introduced with Broadwell. RDSEED outputs "true" random bits derived from an on-chip hardware entropy source. RDRAND outputs bits from a cryptographically secure pseudorandom number generator that is seeded (and periodically reseeded) by that entropy source. According to Intel's documentation, RDSEED is slower, since gathering entropy is costly; RDRAND is offered as a cheaper alternative whose output is sufficiently secure for most cryptographic applications. (This is analogous to the /dev/random versus /dev/urandom distinction on Unix systems.)
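
Both intrinsics return a flag indicating whether a random value was actually delivered; RDSEED in particular can fail transiently when the entropy source is exhausted, and Intel's usage guidelines suggest retrying in that case. As a point of reference, here is a minimal retry wrapper, a sketch only (the function name and the bounded retry count are my own choices; it assumes a compiler exposing <x86intrin.h>):

#include <x86intrin.h>

/* Fill *out with 64 bits from RDSEED, retrying a bounded number of
 * times if the instruction reports failure.  Returns 1 on success,
 * 0 if every attempt failed. */
static int rdseed64_retry(unsigned long long *out, int max_retries) {
  int i;
  for (i = 0; i < max_retries; i++) {
    if (_rdseed64_step(out))
      return 1;        // a fresh value was written to *out
    _mm_pause();       // brief pause before retrying
  }
  return 0;
}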

I was curious about the performance difference between the two instructions, so I wrote some code to compare them. To my surprise, I found virtually no difference in performance. Could anyone provide an explanation?

Benchmark

/* Compare the performance of RDSEED and RDRAND.
 *
 * Compute the CPU time used to fill a buffer with (pseudo) random bits 
 * using each instruction.
 */
#include <time.h>
#include <stdio.h>
#include <stdlib.h>
#include <x86intrin.h>

#define BUFSIZE (1<<24)  /* 16M 64-bit words, i.e. 128 MiB per buffer */

int main() {

  unsigned int ok, i;
  unsigned long long *rand = malloc(BUFSIZE*sizeof(unsigned long long)), 
                     *seed = malloc(BUFSIZE*sizeof(unsigned long long)); 

  clock_t start, end, bm;

  // RDRAND: time filling the first buffer.
  // (Note: the success flag returned by the intrinsic is not checked.)
  start = clock();
  for (i = 0; i < BUFSIZE; i++) {
    ok  = _rdrand64_step(&rand[i]);
  }
  bm = clock() - start;
  printf("RDRAND: %li\n", (long)bm);

  // RDSEED: time filling the second buffer, then report the
  // RDSEED/RDRAND ratio of elapsed clock ticks.
  start = clock();
  for (i = 0; i < BUFSIZE; i++) {
    ok = _rdseed64_step(&seed[i]);
  }
  end = clock();
  printf("RDSEED: %li, %.2f\n", (long)(end - start), (double)(end - start)/bm);

  free(rand);
  free(seed);
  return 0;
}
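
To build this with GCC, the rdrnd and rdseed target features have to be enabled for the intrinsics to be available, so I compile with something like gcc -O2 -mrdrnd -mrdseed rdtest.c -o rdtest (the file name here is just a placeholder).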

System details

  • Intel Core i7-6700 CPU @ 3.40GHz (supports both instructions; see the check below)
  • Ubuntu 16.04
  • gcc 5.4.0
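
Both instructions are advertised through CPUID (RDRAND in leaf 1, ECX bit 30; RDSEED in leaf 7, EBX bit 18), and the i7-6700 (Skylake) supports both. A quick check, sketched with GCC's <cpuid.h> helpers (it assumes CPUID leaf 7 exists, which is the case on this CPU):

#include <stdio.h>
#include <cpuid.h>

int main(void) {
  unsigned int eax, ebx, ecx, edx;

  // RDRAND support: CPUID leaf 1, ECX bit 30
  if (__get_cpuid(1, &eax, &ebx, &ecx, &edx))
    printf("RDRAND: %s\n", (ecx & (1u << 30)) ? "yes" : "no");

  // RDSEED support: CPUID leaf 7 (subleaf 0), EBX bit 18
  __cpuid_count(7, 0, eax, ebx, ecx, edx);
  printf("RDSEED: %s\n", (ebx & (1u << 18)) ? "yes" : "no");

  return 0;
}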


