I am working on a project doing some Bayesian statistics in Python, using NumPy's `random.binomial` function. However, while filling in the keyword arguments/parameters, I am a bit confused about some of the basic theory behind them. My setup is the following:
trial = np.random.binomial(n, .10, 1)
(where n = 1000)
Problem: Say you flip a biased coin (p = .10).
Question: Is there a difference between 1000 tosses with .10 probability done once, and 1 toss with .10 probability done 1000 times? Which is preferable? Is one more computationally efficient than the other?
i.e., what is the difference, if there is one, between:
np.random.binomial(1000, .10, 1)
and
np.random.binomial(1, .10, 1000)
Or, phrased a different way, what is the difference between the parameters of a distribution and the shape?
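To make the comparison concrete, here is a minimal sketch of the two calls side by side (using NumPy's newer `default_rng` generator, which accepts the same `n`, `p`, `size` arguments; the legacy `np.random.binomial` behaves the same way). The seed and variable names are just for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# One draw from Binomial(n=1000, p=0.10): a single count of successes
# out of 1000 coin flips.
one_big_draw = rng.binomial(1000, 0.10, size=1)

# 1000 draws from Binomial(n=1, p=0.10), i.e. 1000 Bernoulli trials:
# an array of 1000 zeros and ones, one per flip.
many_small_draws = rng.binomial(1, 0.10, size=1000)

print(one_big_draw.shape)      # (1,)   -> one aggregated count
print(many_small_draws.shape)  # (1000,) -> one outcome per flip

# Summing the 1000 Bernoulli outcomes yields a single sample that is
# distributed as Binomial(1000, 0.10), so the two approaches agree
# in distribution (not sample-for-sample), with an expected value
# near n * p = 100.
print(many_small_draws.sum())
```

So `n` and `p` are parameters of the distribution being sampled, while `size` is the shape of the output array, i.e. how many independent samples you draw from that distribution.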
I have read the NumPy binomial docs found here.
If someone could explain the theory or basic intuition behind this that would be really helpful!