Thursday, March 28, 2019

Convergence to different results for an optimization

I am using the pyOpt module to solve a convex optimization problem. The optimization always returns a result, and the value it converges to does appear to minimize my objective function, but different runs of my code produce different solutions. My problem is convex but not strictly convex, so I would expect multiple optimal solutions to exist. However, since the starting point of my algorithm is essentially the same across runs, I was wondering whether this could be due to some random procedure inside the algorithm. I am using the SLSQP algorithm; does anybody know if it uses any random procedure?
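One way to isolate the cause is to check determinism directly: run the optimizer twice from an identical starting point and compare the results. The sketch below uses SciPy's SLSQP implementation rather than pyOpt (assumed to behave similarly, since both wrap Kraft's Fortran SLSQP code) on a toy objective that is convex but not strictly convex, so every point on the line x + y = 1 is a minimizer:

```python
import numpy as np
from scipy.optimize import minimize

# Convex but not strictly convex: any point with x + y = 1 minimizes f.
def f(v):
    x, y = v
    return (x + y - 1.0) ** 2

x0 = np.array([5.0, -3.0])  # identical starting point for both runs

res1 = minimize(f, x0, method="SLSQP")
res2 = minimize(f, x0, method="SLSQP")

# With the same start point and deterministic solver internals,
# both runs should produce the same iterates and the same minimizer.
same = np.allclose(res1.x, res2.x)
print("identical solutions:", same)
```

If two runs like this agree but your full program still varies, the randomness likely lives elsewhere: in how the starting point or problem data are generated, in parallel/non-deterministic floating-point reductions, or in random initialization performed by surrounding code.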



