It looks like running scikit-learn's MLPClassifier on the same input on different devices gives different accuracy results, even when a global seed is set.
Minimal working example (MWE):
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

np.random.seed(1)

# X and y (features and labels) are assumed to be defined elsewhere.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.1, stratify=y, random_state=np.random.RandomState(0))

nn = MLPClassifier(hidden_layer_sizes=(100, 100),
                   activation='relu',
                   solver='adam',
                   alpha=0.001,
                   batch_size=50,
                   learning_rate_init=0.01,
                   max_iter=1000,
                   random_state=np.random.RandomState(0))
nn.fit(X_train, y_train)

# Accuracy as the fraction of correct predictions on train and test sets.
y_train_pred = nn.predict(X_train)
acc_train = np.sum(y_train == y_train_pred, axis=0) / X_train.shape[0]
y_test_pred = nn.predict(X_test)
acc_test = np.sum(y_test == y_test_pred, axis=0) / X_test.shape[0]

results = []  # collects [train accuracy, test accuracy] pairs
results.append([acc_train, acc_test])
How can reproducibility be guaranteed (independent of the executing device)?
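One thing worth ruling out first, sketched below: passing a shared np.random.RandomState instance as random_state is stateful (its internal state advances every time it is consumed), whereas an integer seed is stateless and produces the same initialization on every run. The sketch is a minimal, hedged illustration, not a confirmed answer to the cross-device part of the question: it uses load_iris as a placeholder dataset (not part of the original post), and it assumes that any residual differences between machines come from floating-point/BLAS library variations rather than from the RNG.

import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Placeholder dataset, not from the original post.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.1, stratify=y, random_state=0)

def fit_once(seed):
    # Integer seed: the same weight initialization and batch
    # shuffling on every call, unlike a shared RandomState object.
    nn = MLPClassifier(hidden_layer_sizes=(100, 100), activation='relu',
                       solver='adam', alpha=0.001, batch_size=50,
                       learning_rate_init=0.01, max_iter=1000,
                       random_state=seed)
    nn.fit(X_train, y_train)
    return nn.predict(X_test)

# On the *same* machine, two fits with the same integer seed should
# agree exactly; across machines, remaining differences are typically
# floating-point/library effects, which pinning versions can reduce.
assert np.array_equal(fit_once(0), fit_once(0))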