Wednesday, January 23, 2019

Running Keras model.fit() with identical settings returns different results

I have a trained model saved in model_path, and I want to continue its training on a fixed set of data, multiple times, each time starting from the state in which it was saved. If I run the same optimization on the same fixed set of data, with the random number generators for both NumPy and TensorFlow explicitly seeded, I expect the same loss at the end of training. I followed the instructions in the Keras FAQ on obtaining reproducible results, but it does not seem to help.
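For reference, the setup I followed from the Keras FAQ looks roughly like this (the seed value 42 is my own choice; the FAQ also notes that PYTHONHASHSEED ideally has to be set before the interpreter starts):

import os
os.environ['PYTHONHASHSEED'] = '0'

import random
import numpy as np
import tensorflow as tf
from keras import backend as K

# Seed every random number generator involved
np.random.seed(42)
random.seed(42)
tf.set_random_seed(42)

# Force single-threaded execution so TF op scheduling on CPU is deterministic
session_conf = tf.ConfigProto(intra_op_parallelism_threads=1,
                              inter_op_parallelism_threads=1)
sess = tf.Session(graph=tf.get_default_graph(), config=session_conf)
K.set_session(sess)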

My model is a stack of ReLU layers with a single linear layer on top. There is no batch normalization or dropout. The only source of randomness would be the He weight initialization, but that shouldn't come into play since the model I load is already trained.
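As a rough sketch of the architecture (the layer sizes here are placeholders, the real model was trained earlier and saved to model.h5):

from keras.models import Sequential
from keras.layers import Dense

# Stack of ReLU layers with He initialization, linear output layer,
# 12 input features as in the training code below
model = Sequential([
    Dense(64, activation='relu', kernel_initializer='he_normal', input_shape=(12,)),
    Dense(64, activation='relu', kernel_initializer='he_normal'),
    Dense(1, activation='linear'),
])
model.compile(loss='mae', optimizer='adam')
model.save('model.h5')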

import random
import numpy as np
import tensorflow as tf
from keras import backend as K
from keras.models import load_model

# scaler, df and test_amount are defined earlier in the script
for i in range(3):
    # Reset every seed before each run
    tf.set_random_seed(42)
    np.random.seed(42)
    random.seed(42)
    # Fixed training slice: first 150 rows, 12 feature columns, 13th column is the target
    X = scaler.transform(df.iloc[0:150, 0:12].values)
    Y = df.iloc[0:150, 12].values
    # Reload the already-trained model and continue training on the same data
    model = load_model('model.h5')
    model.compile(loss='mae', optimizer='adam')
    _ = model.fit(X, Y, batch_size=150, epochs=20, verbose=0, shuffle=False)
    # Evaluate on a fixed hold-out slice
    x_test = scaler.transform(df.iloc[150:350, 0:12].values)
    y_test = df.iloc[150:350, 12].values
    mae = model.evaluate(x=x_test, y=y_test, steps=test_amount // 50, verbose=0)
    print('MAE: ', mae)
    # Clear Keras/TF state between runs
    K.clear_session()
    tf.reset_default_graph()

Which results in:

MAE:  12.2394380569458
MAE:  12.65652847290039
MAE:  9.243626594543457

Also, I am not running on a GPU. What is causing these differences?
