I am writing parallelized Julia code for Monte Carlo simulations. This requires generating random numbers in parallel on different cores. In a simple test on my workstation, I tried to generate random numbers on 4 cores and got the following results:
julia -p 4
julia> @everywhere using Random
julia> @everywhere x = randn(1)
julia> remotecall_fetch(println,1,x[1])
-1.9348951407543997
julia> remotecall_fetch(println,2,x[1])
From worker 2: -1.9348951407543997
julia> remotecall_fetch(println,3,x[1])
From worker 3: -1.9348951407543997
julia> remotecall_fetch(println,4,x[1])
From worker 4: -1.9348951407543997
I do not understand why the numbers fetched from the different processes are exactly the same, and I am not sure what my mistake is. My understanding is that the @everywhere macro runs the same piece of code on all the processes in parallel, so each worker should call randn independently and produce its own value. I am currently running Julia 1.6.0 on my computer. Thank you.
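For reference, here is a minimal sketch of what I think @everywhere should be doing, assuming each worker keeps its own default RNG (the addprocs(4) call here is just a stand-in for starting Julia with -p 4):

using Distributed
addprocs(4)                     # equivalent to launching julia -p 4
@everywhere using Random

# I expect each worker to evaluate this line independently,
# reporting its own id and drawing its own random number.
@everywhere println("worker ", myid(), ": ", randn())

# Alternatively, generate the number on the remote worker at fetch time
# instead of printing a value that was assigned earlier.
for p in workers()
    v = remotecall_fetch(() -> randn(), p)
    println("from worker ", p, ": ", v)
end

This is only how I picture the intended behaviour, not something I have verified; it is why the identical outputs above surprise me.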