Monday, November 23, 2020

Why doesn't setting random seed give same performance across runs?

I am training some deep learning models using PyTorch, which also involves NumPy. Since the randomisation is pseudo-random rather than truly random, why aren't the numbers (accuracy, etc.) the same across different runs?

I mean, even if I do not set a random seed, there should be some default seed that my code runs with, which would then give the same results across different runs. Is there something more to it?
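A minimal sketch of the behaviour in question, using Python's standard `random` module (NumPy's and PyTorch's generators behave analogously): a pseudo-random generator only repeats its sequence if you seed it explicitly; when no seed is set, the generator is initialised from OS entropy at startup, so there is no fixed "default seed" shared across runs.

```python
import random

# Pseudo-random generators are deterministic *given a seed*:
# the same seed always produces the same sequence.
random.seed(42)
a = [random.random() for _ in range(3)]

random.seed(42)
b = [random.random() for _ in range(3)]

assert a == b  # identical sequences with the same seed

# Without an explicit seed, the generator is initialised from
# OS-provided entropy (e.g. system time / urandom), so each fresh
# process starts from a different internal state.
```

Note that in PyTorch, seeding alone may still not make runs bit-identical: some GPU kernels are non-deterministic, which is why the library exposes additional switches for deterministic execution.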

Please do let me know if something is not clear.

Thanks,

Megh



