Friday, October 12, 2018

How to get random numbers after loading a tf model?

I have a tf model that looks like this:

def model(x):
    z_mean = ...
    z_log_stddev = ...  # outputs of a few fully connected layers

    # reparameterization: draw eps ~ N(0, 1), then z = mean + stddev * eps
    eps = tf.random_normal(shape=tf.shape(z_log_stddev),
                           mean=0.0,
                           stddev=1.0,
                           dtype=tf.float32)

    with tf.name_scope('z'):
        z = z_mean + tf.exp(z_log_stddev) * eps

    ...  # define loss, optimizer, the result tensor I am interested in, etc.

After training it, I saved it with the following code:

saver = tf.train.Saver()

saver.save(sess, PATH)
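
For a self-contained picture of the save step, here is a toy stand-in for my real model: one trained variable plus a tf.random_normal sample, saved exactly the same way. All names here (W, y, /tmp/toy-model) are illustrative only, not from my actual code:

import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 3], name='x')
W = tf.get_variable('W', shape=[3, 3])
# same pattern as the real model: a random op whose shape follows the input
eps = tf.random_normal(shape=tf.shape(x), mean=0.0, stddev=1.0, dtype=tf.float32)
y = tf.matmul(x, W) + eps  # TF names this op 'add' by default

saver = tf.train.Saver()
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    saver.save(sess, '/tmp/toy-model')  # writes /tmp/toy-model.meta, .index, .data-*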

For my application, I want to load the model and run the same test sample through the network multiple times (say, 5 times). Because the result depends on a randomly generated variable, I expect to get 5 slightly different results. However, most of the time I get the same result on every run except the last one. I tested this with the following code:

result_list = []

for i in range(5):
    # rebuild the graph from scratch on every iteration
    tf.reset_default_graph()
    sess = tf.Session()

    # restore both the graph structure and the trained weights
    loader = tf.train.import_meta_graph('PATH/TO/GRAPH')
    loader.restore(sess, tf.train.latest_checkpoint(PATH))

    graph = tf.get_default_graph()
    x = graph.get_tensor_by_name('x:0')
    result = graph.get_tensor_by_name('BiasAdd:0')

    result_list.append(sess.run(result, feed_dict={x: TEST_DATA}))
    sess.close()

print(result_list)
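
One check I can run: inside a single restored session, fetch the random tensor itself more than once. As far as I know, tf.random_normal draws a fresh sample on every sess.run call, so within one session the two fetches should differ. The tensor name 'random_normal:0' is TF's default for the first such op and is an assumption about my graph's op names:

tf.reset_default_graph()
with tf.Session() as sess:
    loader = tf.train.import_meta_graph('PATH/TO/GRAPH')
    loader.restore(sess, tf.train.latest_checkpoint(PATH))
    graph = tf.get_default_graph()
    x = graph.get_tensor_by_name('x:0')
    # assumed default name for the first tf.random_normal op in the graph
    eps = graph.get_tensor_by_name('random_normal:0')
    a = sess.run(eps, feed_dict={x: TEST_DATA})
    b = sess.run(eps, feed_dict={x: TEST_DATA})
    # tf.random_normal re-samples on each sess.run, so a and b should differ
    print('eps varies within one session:', not (a == b).all())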

In short, I reload the model before each evaluation of my test data, expecting the random variable to be re-drawn each time. Clearly that is not happening, since I get the same numbers.

I did NOT set a seed anywhere! I suspect that a seed is saved along with the model.

Question: when loading a saved model, how do I make sure the random variable is sampled differently on each run?
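
To make this reproducible end to end, the toy model saved above can be restored in the same loop structure as my real code (again, the path and tensor names belong to the toy, not my real model; 'add:0' is TF's default name for the matmul-plus-eps op):

import numpy as np
import tensorflow as tf

samples = []
for i in range(5):
    tf.reset_default_graph()
    with tf.Session() as sess:
        loader = tf.train.import_meta_graph('/tmp/toy-model.meta')
        loader.restore(sess, '/tmp/toy-model')
        graph = tf.get_default_graph()
        x = graph.get_tensor_by_name('x:0')
        y = graph.get_tensor_by_name('add:0')  # matmul(x, W) + eps from the toy
        samples.append(sess.run(y, feed_dict={x: np.ones((1, 3), np.float32)}))
print(samples)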



