Thursday, December 24, 2015

Can we reproduce the same benchmarking results for a stochastic algorithm, using the same random seed but on different machines?

I am testing a stochastic algorithm. To make the results reproducible, I plan to use a fixed random seed and to publish this seed (an integer) alongside the benchmark results.

But I have a naive question about the random seed. Are others, on a different machine, guaranteed to reproduce my results if they use the same seed? Admittedly, I know little about how random seeds work in principle; many websites explain it in more or less detail, but perhaps you have some thoughts on the topic to share?
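To make my mental model explicit, here is a minimal sketch of how I understand seeding: a pseudo-random generator is deterministic, so re-seeding it with the same integer should replay exactly the same sequence.

    import numpy as np

    # A pseudo-random generator is deterministic: seeding it with the same
    # integer replays exactly the same sequence of "random" numbers.
    np.random.seed(42)
    first_run = np.random.rand(3)

    np.random.seed(42)
    second_run = np.random.rand(3)

    print(first_run)                              # same three numbers on both runs
    assert np.array_equal(first_run, second_run)  # passes: the sequences match

My question is whether this equality still holds when the second run happens on someone else's machine.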

Concretely, I have a Python project built on scipy.optimize procedures. I will call numpy.random.seed(42) for my published benchmark results, and I expect others to obtain the same results as on my machine, as long as the same seed is used. Does that make sense?
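For instance, here is a stripped-down sketch of what my benchmark script looks like; differential_evolution and the Rosenbrock function are just stand-ins for my actual scipy.optimize procedures and benchmark problems.

    import numpy as np
    from scipy.optimize import differential_evolution

    def rosenbrock(x):
        # Stand-in objective; my real benchmark problems go here.
        return (1.0 - x[0])**2 + 100.0 * (x[1] - x[0]**2)**2

    # Fix the global NumPy seed, and also pass a seed to the optimizer itself,
    # so that its internal randomness (population init, mutation) is fixed too.
    np.random.seed(42)
    result = differential_evolution(rosenbrock,
                                    bounds=[(-5, 5), (-5, 5)],
                                    seed=42)

    print(result.x, result.fun)  # expected to be identical across runs with the same seed

My assumption is that anyone running this script with the same seed, on any machine, would print the same numbers.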



