I don't think I'm the first one to come up with this slightly unorthodox idea, but I can't seem to get Google to show me why it is bad, nor how to do it properly.
I have a piece of code that is CPU-bound, and the second most expensive function is np.random.randint(...),
which is called for single numbers (one at a time). I don't have hard requirements for "true" randomness between multiple executions of the program. Hence, I thought it could be smart to precompute/cache a whole bunch (~2 million) of random numbers, save them somewhere, and then have NumPy feed me those numbers as required instead of running the RNG on every call.
Could somebody please enlighten me as to why this is a bad idea, or how to do it? Thank you!
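For what it's worth, here is a minimal sketch of the precompute-a-pool idea described above. The pool size matches the ~2 million figure from the question; the integer bounds (0 to 100) are placeholder assumptions, since the question doesn't state them:

```python
import numpy as np

POOL_SIZE = 2_000_000  # ~2 million, as in the question
LOW, HIGH = 0, 100     # placeholder bounds; substitute the real range

rng = np.random.default_rng()
_pool = rng.integers(LOW, HIGH, size=POOL_SIZE)  # one big vectorized draw
_index = 0

def next_random():
    """Return the next cached number, refilling the pool when exhausted."""
    global _pool, _index
    if _index >= POOL_SIZE:
        _pool = rng.integers(LOW, HIGH, size=POOL_SIZE)
        _index = 0
    value = int(_pool[_index])
    _index += 1
    return value
```

The speedup comes from replacing millions of per-call RNG invocations (each with Python-level overhead) with one vectorized draw; the pool could also be saved to disk with np.save and reloaded across runs, given the relaxed randomness requirements.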