I'm running a co-simulation and setting the seed at the start of the program. I'm drawing from a joint lognormal distribution. This is my function:
import random

import numpy as np

def get_new_EV(numEVs):
    # numEVs is the number of EVs to return to the main program
    # each level's lognormal has a randomly drawn mean and sigma
    lvl2 = np.random.lognormal(np.random.normal(5, 1), np.random.uniform(0, 2), 1)
    lvl1 = np.random.lognormal(np.random.normal(3, 1), np.random.uniform(0, 10), 1)
    lvl3 = np.random.lognormal(np.random.normal(2, 1), np.random.uniform(0, .1), 1)
    total = lvl1 + lvl2 + lvl3
    # print(lvl1, lvl2, lvl3, total)
    p1, p2, p3 = lvl1 / total, lvl2 / total, lvl3 / total
    # print(p1, p2, p3)
    listOfEVs = np.random.choice([1, 2, 3], numEVs, p=[p1[0], p2[0], p3[0]]).tolist()
    numLvl1 = listOfEVs.count(1)
    numLvl2 = listOfEVs.count(2)
    numLvl3 = listOfEVs.count(3)
    return numLvl1, numLvl2, numLvl3, listOfEVs
In the main program, I execute this:
if __name__ == "__main__":
    random.seed(1)
    print('test seed: ', get_new_EV(2))
The output of two separate runs of the program is this:
test seed: (0, 0, 2, [3, 3])
test seed: (2, 0, 0, [1, 1])
I don't understand -- the seed is the same. Shouldn't the output be the same? Isn't that the point of random.seed()?
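To isolate what I'm seeing, here is a minimal sketch (using only np.random.lognormal with fixed parameters, not my actual function) comparing random.seed with np.random.seed:

```python
import random

import numpy as np

# Seed the stdlib generator, then draw from NumPy, twice.
random.seed(1)
a = np.random.lognormal(0, 1, 3)
random.seed(1)
b = np.random.lognormal(0, 1, 3)
# a and b differ: random.seed does not touch NumPy's generator state.

# Seed NumPy's own global generator, then draw, twice.
np.random.seed(1)
c = np.random.lognormal(0, 1, 3)
np.random.seed(1)
d = np.random.lognormal(0, 1, 3)
# c and d are identical: np.random draws reproduce under np.random.seed.
```

So the draws only repeat when I seed NumPy's generator directly, which suggests the two libraries keep separate generator states.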