A neuroevolution program I am debugging does not produce fresh random values each time its random-number helper is called. In the program, a vector of Network objects is initialized with the following statement:
vector<Network> population(POPULATION_SIZE, Network(sizes, inputCount));
The reason I believe the program is not converging to an optimal solution is that the first 100 networks in the population are always identical. When a network is initialized in this manner, each connection weight and neuron bias is initialized with the following class function:
double randDouble(double low, double high) {
    // Attempt 1: default_random_engine seeded from the system clock on every call.
    /*
    default_random_engine generator(std::chrono::system_clock::now().time_since_epoch().count());
    uniform_real_distribution<double> distribution(low, high);
    return distribution(generator);
    */

    // Attempt 2: rand(), re-seeded with srand(time(NULL)) on every call.
    /*
    srand(time(NULL));
    double temp;
    if (low > high) {
        temp = low;
        low = high;
        high = temp;
    }
    temp = (rand() / (static_cast<double>(RAND_MAX) + 1.0)) * (high - low) + low;
    return temp;
    */

    // Attempt 3: mt19937 seeded from the system clock on every call.
    /*
    mt19937 rgn(std::chrono::system_clock::now().time_since_epoch().count());
    uniform_real_distribution<double> gen(low, high);
    return gen(rgn);
    */

    // Current version: a default-constructed engine bound to the distribution.
    default_random_engine rd;
    uniform_real_distribution<double> gen(low, high);
    auto val = std::bind(gen, rd);
    return val();
}
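To take the rest of the program out of the picture, here is a standalone sketch (not my actual Network code, and the function name is just a placeholder) that uses the same per-call-engine pattern as the current, uncommented version; for me it prints an identical value on every iteration:

#include <functional>
#include <iostream>
#include <random>
using namespace std;

// Same pattern as the uncommented version of randDouble above:
// a default_random_engine is constructed on every call.
double randDoubleRepro(double low, double high) {
    default_random_engine rd;
    uniform_real_distribution<double> gen(low, high);
    auto val = std::bind(gen, rd);
    return val();
}

int main() {
    for (int i = 0; i < 5; ++i)
        cout << randDoubleRepro(-1.0, 1.0) << '\n';  // the same value is printed five times
}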
The three commented-out sections in randDouble above are earlier attempts at the same functionality. In each case they produce the same numbers for every network (the values differ from one weight to another, but not from one network to another). The attempted methods are based on answers from here:
- c++-default_random_engine creates all the time same series of numbers
- http://en.cppreference.com/w/cpp/numeric/random/uniform_real_distribution
In addition, the second method (the rand()-based one) produces the same results with or without the srand call. I must be missing something.
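For completeness, this is the pattern attempts 1 and 3 follow, again as a standalone sketch outside my actual code (the function name is a placeholder): a fresh engine is built and seeded from the system clock on every call, so if two calls land on the same clock tick they receive the same seed and therefore return the same value.

#include <chrono>
#include <iostream>
#include <random>
using namespace std;

// Mirrors attempts 1 and 3 above: a new engine is constructed and seeded
// from the system clock on every call.
double randDoubleSeededPerCall(double low, double high) {
    mt19937 rng(std::chrono::system_clock::now().time_since_epoch().count());
    uniform_real_distribution<double> gen(low, high);
    return gen(rng);
}

int main() {
    // On a coarse clock, consecutive iterations can get identical seeds
    // and print identical values; on a fine-grained clock they differ.
    for (int i = 0; i < 5; ++i)
        cout << randDoubleSeededPerCall(-1.0, 1.0) << '\n';
}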