Sunday, April 1, 2018

Random grid search for hyperparameter optimization

I am afraid this question might be too simple, but I was not able to find any confirmation that this is actually the right way of doing it without relying on Python libraries such as scikit-learn.

I am talking about implementing random grid search for tuning a neural network's hyperparameters. Let's say I want to try combinations of 2 hyperparameters (a, b), taking three values each. For both of them to be random in all three trials, the way to do this would be to:

1) Take three randomly distributed samples for hyperparameter "a"

2) Take three randomly distributed samples for hyperparameter "b"

3) Pair them up to get 3 different random hyperparameter "sets".

Would I then have as many "sets" as there are samples drawn per hyperparameter? So if I want to get 20 different sets of hyperparameters, I should draw 20 samples for each hyperparameter.
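The three steps above can be sketched in plain Python. This is a minimal illustration, not a reference implementation: the hyperparameter names (a, b) come from the question, while the sampling ranges and the distributions (uniform for a, integer-uniform for b) are arbitrary assumptions chosen just to make the example concrete.

```python
import random

def sample_hyperparameter_sets(n_sets, a_range=(1e-4, 1e-1), b_range=(16, 256)):
    """Draw n_sets random (a, b) hyperparameter pairs.

    The ranges and distributions here are illustrative placeholders.
    """
    # Step 1: n_sets random samples for hyperparameter "a"
    a_samples = [random.uniform(*a_range) for _ in range(n_sets)]
    # Step 2: n_sets random samples for hyperparameter "b"
    b_samples = [random.randint(*b_range) for _ in range(n_sets)]
    # Step 3: pair them up into n_sets random hyperparameter "sets"
    return list(zip(a_samples, b_samples))

# As in the question: 20 sets require 20 samples per hyperparameter.
hyperparameter_sets = sample_hyperparameter_sets(20)
```

Each resulting pair would then be used for one training run, and the pair with the best validation score kept.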

Is this the correct way of doing it, or am I missing some more complicated logic behind it?

Thanks!
