Thursday, February 23, 2023

How to generate the same random number sequence within each thread

I have code that converts an image with 32 output layers, produced by an AI segmentation model, into a single layer where each pixel picks one of the 32 layers with a probability proportional to that layer's score. To do that, I generate a random float for each pixel to decide which of the 32 layers wins.

When I run this code in a single thread, it generates the same output every time. However, when I run it with OpenMP (to make it faster), it generates a different output every time, even when I make the random number generator private to each thread and initialize it with the same seed for each row. I also tried hardcoding the seed to 0, and that did not solve the problem. It is as if one thread were interfering with the number sequence of another.

I need this code to consistently generate the same result every time, to make the output easier to test. Any ideas?

    cv::Mat prediction_map(aiPanoHeight, aiPanoWidth, CV_8UC1);
#pragma omp parallel for schedule(dynamic, aiPanoHeight/32)
    for (int y=0;y<aiPanoHeight;++y){
        static std::minstd_rand0 rng(y);
        std::uniform_real_distribution<float> dist(0, 1);
        for (int x=0;x< aiPanoWidth;++x){
            float values[NUM_CLASSES];
            // populate values with the normalized score for each class, so that the total is 1
            float r = dist(rng);
            for (int c = 0; c < NUM_CLASSES; ++c)
            {
                r -= values[c];
                if(r<=0) {
                    prediction_map.at<uchar>(y, x) = int(aiClassesLUT[c]); // paint prediction map with the corresponding color of the winning layer
                    break;
                }
            }
        }
    }
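
For reference, here is a minimal sketch of how the loop could look with a genuinely per-row generator: dropping the `static` so every iteration constructs its own `std::minstd_rand0` seeded with `y`. With a `static` local, the generator is constructed only once (seeded by whichever row reaches it first) and then shared by every thread with data races, which matches the non-deterministic behaviour described above. This is only a sketch, not a drop-in fix: `NUM_CLASSES` and `aiClassesLUT` are assumed to be defined as in the question, the score normalization is left as a comment, and the wrapper function name `buildPredictionMap` is made up for illustration.

    #include <opencv2/core.hpp>
    #include <random>

    // Sketch only: NUM_CLASSES and aiClassesLUT are assumed to exist as in the
    // question; the per-pixel score normalization is stubbed out as a comment.
    cv::Mat buildPredictionMap(int aiPanoHeight, int aiPanoWidth)
    {
        cv::Mat prediction_map(aiPanoHeight, aiPanoWidth, CV_8UC1);
    #pragma omp parallel for schedule(dynamic, aiPanoHeight / 32)
        for (int y = 0; y < aiPanoHeight; ++y) {
            // Non-static local: each row owns its own generator, seeded only by y,
            // so a given row produces the same sequence no matter which thread runs it.
            std::minstd_rand0 rng(y);
            std::uniform_real_distribution<float> dist(0.0f, 1.0f);
            for (int x = 0; x < aiPanoWidth; ++x) {
                float values[NUM_CLASSES];
                // populate values with the normalized score for each class, so the total is 1
                float r = dist(rng);
                for (int c = 0; c < NUM_CLASSES; ++c) {
                    r -= values[c];
                    if (r <= 0) {
                        prediction_map.at<uchar>(y, x) = int(aiClassesLUT[c]);
                        break;
                    }
                }
            }
        }
        return prediction_map;
    }

Under these assumptions the output no longer depends on how OpenMP assigns rows to threads, because each row's random sequence is a function of `y` alone.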



