Friday, August 25, 2017

TensorFlow: how to make results reproducible for `tf.nn.sampled_softmax_loss`

I would like to get reproducible results from my TensorFlow runs. The way I'm trying to make this happen is by setting the NumPy and TensorFlow seeds:

import numpy as np
rnd_seed = 1
np.random.seed(rnd_seed)

import tensorflow as tf
tf.set_random_seed(rnd_seed)

I also make sure that the weights of the neural network, which I initialize with tf.truncated_normal, use that seed: tf.truncated_normal(..., seed=rnd_seed).
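
For context, this is roughly how each weight matrix gets created (the layer sizes below are placeholders for illustration, not my actual network):

import tensorflow as tf

rnd_seed = 1

# Layer sizes are made up for this example.
n_in, n_out = 300, 100

# Seeded initializer, so the initial weights are the same on every run.
W = tf.Variable(
    tf.truncated_normal([n_in, n_out], stddev=0.1, seed=rnd_seed),
    name="W")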

For reasons beyond the scope of this question, I'm using the sampled softmax loss function, tf.nn.sampled_softmax_loss, and unfortunately I'm not able to control the stochasticity of this function with a random seed.

Looking at the TensorFlow documentation for this function (http://ift.tt/2s3r3Q8), I can see that the sampled_values parameter should be the only one that affects randomization, but I can't figure out how to actually use a seed with it.
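
If I understand the signature correctly, sampled_values expects the (sampled_candidates, true_expected_count, sampled_expected_count) tuple returned by one of the candidate sampler ops, so my best guess is something like the sketch below, where I draw the negatives myself with a seeded tf.nn.log_uniform_candidate_sampler and pass them in. All shapes and sizes here are made up for illustration, and I'm not sure whether this is the intended way to do it, or whether it actually makes the loss deterministic:

import tensorflow as tf

rnd_seed = 1

# Illustrative sizes only, not my real model.
num_classes = 10000
embed_dim = 128
batch_size = 32
num_sampled = 64

weights = tf.Variable(
    tf.truncated_normal([num_classes, embed_dim], stddev=0.1, seed=rnd_seed))
biases = tf.Variable(tf.zeros([num_classes]))
inputs = tf.placeholder(tf.float32, [batch_size, embed_dim])
labels = tf.placeholder(tf.int64, [batch_size, 1])

# Draw the negative samples explicitly with a seeded candidate sampler...
sampled_values = tf.nn.log_uniform_candidate_sampler(
    true_classes=labels,
    num_true=1,
    num_sampled=num_sampled,
    unique=True,
    range_max=num_classes,
    seed=rnd_seed)

# ...and hand them to the loss via sampled_values, so it (hopefully) does not
# sample internally with its own unseeded sampler.
loss = tf.reduce_mean(
    tf.nn.sampled_softmax_loss(
        weights=weights,
        biases=biases,
        labels=labels,
        inputs=inputs,
        num_sampled=num_sampled,
        num_classes=num_classes,
        sampled_values=sampled_values))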



