Friday, June 8, 2018

Unable to reproduce same values in multiple runs on Tensorflow

In my project, training a CNN model for classification, I am running into non-reproducible results across runs.

I have set seeds for both NumPy and TensorFlow at the start of the session.
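For reference, a minimal sketch of that kind of seeding (the `seed_everything` helper and the `SEED` value are my own illustration, not from the original code; in TF 1.x the graph-level call would be `tf.set_random_seed`):

```python
import random

import numpy as np

SEED = 42  # illustrative value


def seed_everything(seed):
    """Seed Python's and NumPy's generators.

    With TensorFlow 1.x you would additionally call
    tf.set_random_seed(seed) before building the graph.
    """
    random.seed(seed)
    np.random.seed(seed)


seed_everything(SEED)
first = np.random.rand(3)
seed_everything(SEED)
second = np.random.rand(3)
assert np.array_equal(first, second)  # same seed, same draws
```

Seeding alone pins down the initial state, but it does not guarantee that every downstream op executes deterministically.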

I have also checked that all the initialisations (data shuffling order and the weight initialisations for every layer) are identical across runs. Even so, the results (cost and accuracy) vary from run to run.

The weights of all layers also differ after just one epoch.

This is the cost function I'm using:

cost = tf.reduce_mean(tf.losses.hinge_loss(logits=pred, labels=labels), name=name)
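As a point of reference, `tf.losses.hinge_loss` expects labels in {0, 1} and, to the best of my knowledge, maps them to {-1, +1} internally before applying the hinge. A NumPy sketch of the same mean hinge loss (the data values are invented for illustration):

```python
import numpy as np


def hinge_loss_mean(logits, labels):
    # labels in {0, 1} are mapped to signs in {-1, +1}, mirroring
    # (as far as I know) what tf.losses.hinge_loss does internally.
    signs = 2.0 * labels - 1.0
    return np.mean(np.maximum(0.0, 1.0 - signs * logits))


logits = np.array([0.8, -0.3, 1.5])  # illustrative raw scores
labels = np.array([1.0, 0.0, 1.0])   # illustrative {0, 1} labels
loss = hinge_loss_mean(logits, labels)  # ≈ 0.3 for these values
```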

These are the other ops I use for computing accuracy, where Z is the output of the model's last layer:

pred = tf.argmax(Z, axis=1, name="predictions")
true_preds = tf.equal(pred, tf.argmax(Y, axis=1))
accuracy = tf.reduce_mean(tf.cast(true_preds, "float"), name="Accuracy")
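To make the accuracy computation concrete, here is the same argmax comparison in NumPy (the small `Z` and `Y` arrays are invented; three samples, two classes):

```python
import numpy as np

# Illustrative scores (Z) and one-hot labels (Y)
Z = np.array([[2.0, 0.5],
              [0.1, 1.2],
              [3.0, 0.2]])
Y = np.array([[1, 0],
              [1, 0],
              [0, 1]])

pred = np.argmax(Z, axis=1)                # predicted class per sample
true_preds = pred == np.argmax(Y, axis=1)  # elementwise correctness
accuracy = np.mean(true_preds.astype(np.float32))  # fraction correct
```

For these made-up values only the first sample is classified correctly, so the accuracy is 1/3.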

Even when I run the model for just one epoch, the results are not reproducible, although, to reiterate, the initialisations are all consistent. I also run my code on a single GPU with no parallelism.
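One thing worth checking (a sketch under the assumption of a TF 1.x session, not something the post confirms): even on a single GPU, multi-threaded op execution can make floating-point reductions non-deterministic because summation order varies between runs. A common debugging step is to pin the session to single-threaded execution:

```python
import tensorflow as tf  # TF 1.x assumed

# Restricting both thread pools to one thread removes one common source
# of run-to-run variation: non-deterministic reduction order.
config = tf.ConfigProto(intra_op_parallelism_threads=1,
                        inter_op_parallelism_threads=1)
sess = tf.Session(config=config)
```

Note that some GPU kernels (certain cuDNN convolution backward paths, for example) can remain non-deterministic regardless of seeding or threading.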

I have gone through similar questions but couldn't find a solution; please point me to any solved question I may have missed.



