I've got a program that simulates events happening. I have the following constraints:
- The distribution should approximate a Poisson distribution, i.e., independent events occurring at a fixed rate.
- At time 1000 ticks, the event should have happened 100 times (on average). (1000 and 100 are placeholders; the real numbers are determined experimentally from the real-world system I'm trying to model.)
I'm currently using code of the following form:
from random import random   # random() returns a float in the half-open range [0.0, 1.0)

def tick():
    doMaintenance()
    # 'while' rather than 'if', so more than one event can land in the same tick
    while random() < 0.1:   # 0.1 is the placeholder rate: 100 events / 1000 ticks
        eventHappens()
I use a while instead of an if to capture the idea that these events are independent; otherwise, there would be no chance of two events occurring in the same tick. However, I suspect this means that the random() < 0.1 test (where random returns a number in the half-open range [0.0, 1.0)) is slightly inaccurate. (It's okay that I'm quantizing the event occurrence times.)
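To make the "slightly inaccurate" part concrete, here is a throwaway check I put together (the helper events_this_tick and the tally are just for this question, not part of my real code). It counts how many times the loop body fires in a single tick; because that count is geometric rather than a single yes/no draw, the per-tick mean works out to f / (1 - f) ≈ 0.111 instead of 0.1:

    from random import random
    from collections import Counter

    def events_this_tick(f=0.1):
        # Count how many times the while loop fires within one tick.
        n = 0
        while random() < f:
            n += 1
        return n

    trials = 1_000_000
    counts = Counter(events_this_tick() for _ in range(trials))
    mean = sum(k * v for k, v in counts.items()) / trials
    print(counts.most_common(3))  # mostly 0 events, some 1s, a few 2s
    print(mean)                   # near 0.1 / (1 - 0.1) = 0.111..., not 0.1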
Can somebody suggest the correct constant f in random() < f to use if I want (in the general case) a Poisson distribution such that at time t the event has happened e times on average? I believe that such a constant f exists, but its derivation is not obvious to me.
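In case it helps to talk in code, this is the sort of crude harness I would use to sanity-check a candidate f against the target (total_events and the run counts are placeholders I made up for this question, not my real code):

    from random import random

    def total_events(f, ticks):
        # Run the tick loop for the given number of ticks; return the total event count.
        total = 0
        for _ in range(ticks):
            while random() < f:
                total += 1
        return total

    # For the placeholder target of e = 100 events by t = 1000 ticks, a correct f
    # should bring this average close to 100; my current f = 0.1 lands a bit high.
    runs = 500
    avg = sum(total_events(0.1, 1000) for _ in range(runs)) / runs
    print(avg)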
I'm posting this on stackoverflow.com so I can conveniently talk in coding terms, and because I'm using a tick-by-tick simulation, which is more familiar to numerical simulation programmers than to mathematicians. If this is more appropriate for math.stackexchange.com, though, let me know.