Friday, September 21, 2018

Is there an elegant and efficient way to implement weighted random choices in golang? Details on current implementation and issues inside

tl;dr: I'm looking for methods to implement a weighted random choice based on the relative magnitude of values (or functions of values) in an array in golang. Are there standard algorithms or recommendable packages for this? If so, how do they scale?

Goals

I'm trying to write 2D and 3D Markov process programs in golang. A simple 2D example is the following: imagine a lattice where each site, labeled by index (i,j), holds n(i,j) particles. At each time step, the program chooses a site and moves one particle from that site to a random adjacent site. The probability that a site is chosen is proportional to its population n(i,j) at that time.
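For concreteness, the primitive I keep needing is "pick an index with probability proportional to its weight." A naive linear-scan version looks something like this (an illustrative sketch; weightedIndex and the flattened weights slice are made-up names, not from my code):

```go
import "math/rand"

// weightedIndex returns i with probability weights[i] / sum(weights).
// Naive O(n) scan per draw; assumes non-negative weights with a
// positive total.
func weightedIndex(rng *rand.Rand, weights []int) int {
	total := 0
	for _, w := range weights {
		total += w
	}
	r := rng.Intn(total) // uniform in [0, total)
	for i, w := range weights {
		r -= w
		if r < 0 {
			return i
		}
	}
	return len(weights) - 1 // unreachable when total > 0
}
```

The CDF-plus-binary-search approach below replaces this O(n) scan per draw with an O(log n) lookup, at the cost of keeping the prefix sums up to date.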

Current Implementation

My current algorithm, e.g. for the 2D case on an L x L lattice, is the following:

  • Convert the starting array into a slice of length L^2 by concatenating rows in order, e.g. cdfpop[i*L+j] = initialpopulation[i][j].
  • Convert the 1D slice into a CDF with a running sum, i.e. a for loop applying cdfpop[i] += cdfpop[i-1].
  • Generate two random numbers: Rsite, ranging from 1 to the largest value in the CDF (which is just the last entry, cdfpop[L^2-1]), and Rhop, ranging from 1 to 4. The first chooses a weighted random site; the second chooses a random direction to hop in.
  • Use a binary search to find the leftmost index indexhop of cdfpop that is greater than or equal to Rsite. The index being hopped to is indexhop ± 1 for x-direction hops or indexhop ± L for y-direction hops.
  • Finally, directly update the values of cdfpop to reflect the hop. If the hop is toward a higher index, subtract one from every entry from the source index up to (but not including) the destination index; if toward a lower index, add one to every entry from the destination index up to (but not including) the source index.
  • Rinse and repeat in a for loop. At the end, difference the CDF (undo the running sum) to recover the final population. A sketch of a single step is given below.
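Here is roughly what a single step looks like in Go (a sketch, not my exact code: boundary sites are ignored, and the names step, from, and to are illustrative):

```go
import (
	"math/rand"
	"sort"
)

// step performs one hop on the flattened integer CDF, following the
// list above. Hops off the lattice edge are not handled here.
func step(rng *rand.Rand, cdfpop []int, L int) {
	total := cdfpop[len(cdfpop)-1]
	Rsite := rng.Intn(total) + 1 // uniform in [1, total]

	// Leftmost index with cdfpop[index] >= Rsite: this picks a site
	// with probability proportional to its population.
	from := sort.SearchInts(cdfpop, Rsite)

	// Random hop direction: +-1 for x hops, +-L for y hops.
	var to int
	switch rng.Intn(4) {
	case 0:
		to = from + 1
	case 1:
		to = from - 1
	case 2:
		to = from + L
	default:
		to = from - L
	}

	// Patch the CDF in place instead of rebuilding it: only the
	// prefix sums between the two sites shift, by exactly one.
	if to > from {
		for i := from; i < to; i++ {
			cdfpop[i]--
		}
	} else {
		for i := to; i < from; i++ {
			cdfpop[i]++
		}
	}
}
```

The in-place patch is what keeps this version fast: each step costs one binary search plus a short range update, and the CDF is never rebuilt.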

This process works really well for simple problems. For this particular problem, I can run about 1 trillion steps on a 1000 x 1000 lattice in about 2 minutes on average with my current setup, and I can compile population data into gifs every 10,000 or so steps by spinning up a goroutine without a huge slowdown.

Where efficiency breaks down

The trouble comes when I want to add different processes that have real-valued coefficients. Say I now have a hopping rate of k_hop * n(i,j) and a death rate (where I simply remove a particle) of k_death * n(i,j)^2. There are two slowdowns in this case:

  • My CDF will be double the size (not that big a deal). It will be real-valued, built as cdfpop[i*L+j] = 4 * k_hop * pop[i][j] for i*L+j < L*L and cdfpop[i*L+j] = k_death * math.Pow(pop[i][j], 2) for L*L <= i*L+j < 2*L*L, followed by the running sum cdfpop[i] += cdfpop[i-1]. I would then select a random real in the range of the CDF.
  • Because of the squared n, I have to dynamically recalculate the part of the CDF associated with the death process at each step. This is a MAJOR slowdown, as expected: each step now takes about 3 microseconds, compared with less than a nanosecond for the original algorithm. A sketch of the rebuild follows this list.
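For reference, the rebuild I'm describing looks something like this (a sketch under the layout above; buildCDF and sample are made-up names, and kHop/kDeath stand for k_hop/k_death):

```go
import (
	"math"
	"math/rand"
	"sort"
)

// buildCDF fills a slice of length 2*L*L -- hop terms first, death
// terms second -- then takes the running sum. This is the per-step
// recalculation that now dominates the cost.
func buildCDF(pop [][]int, L int, kHop, kDeath float64, cdfpop []float64) {
	for i := 0; i < L; i++ {
		for j := 0; j < L; j++ {
			n := float64(pop[i][j])
			cdfpop[i*L+j] = 4 * kHop * n                // hopping rate
			cdfpop[L*L+i*L+j] = kDeath * math.Pow(n, 2) // death rate
		}
	}
	for i := 1; i < 2*L*L; i++ {
		cdfpop[i] += cdfpop[i-1]
	}
}

// sample draws a uniform real over the CDF's range; indices below
// L*L are hop events, the rest are death events.
func sample(rng *rand.Rand, cdfpop []float64) int {
	R := rng.Float64() * cdfpop[len(cdfpop)-1]
	return sort.SearchFloat64s(cdfpop, R)
}
```

(Writing n*n instead of math.Pow(n, 2) would shave a little time, but the O(L^2) rebuild is the real cost either way.)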

This problem only gets worse if rates are calculated as functions of the populations on neighboring sites -- e.g. spontaneous particle creation whose rate depends on the product of the populations on neighboring sites. While I hope to work out a way to modify the CDF without full recalculation by thinking really hard, as I try to simulate problems of increasing complexity I can't help but wonder if there is a universal solution with reasonable efficiency that I'm missing, one that doesn't require specialized code for each random process.

Thanks for reading!



