1) I am confused about what we call a "sampler". From what I understand, a sampler generates a set of points distributed according to a known PDF (probability density function), doesn't it?
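To make concrete what I mean by "sampler", here is a minimal sketch using NumPy's built-in Gaussian sampler (the histogram of the draws should match the analytic PDF):

```python
import numpy as np

rng = np.random.default_rng(0)

# A "sampler" in my understanding: produce points whose empirical
# distribution follows a known PDF, here a standard Gaussian.
samples = rng.normal(loc=0.0, scale=1.0, size=100_000)

# Check: normalized histogram of the samples vs. the analytic PDF.
hist, edges = np.histogram(samples, bins=50, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
pdf = np.exp(-centers**2 / 2) / np.sqrt(2 * np.pi)
print(np.max(np.abs(hist - pdf)))  # small -> the samples follow the PDF
```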
2) On the other hand, there is the Monte Carlo method, which allows us, for example, to estimate the number Pi by generating random values (x_i, y_i) and checking whether x_i^2 + y_i^2 < R^2. The ratio of accepted points to total points generated gives an estimate of Pi/4 (for points drawn uniformly in the unit square with R = 1), so multiplying by 4 estimates Pi.
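Here is that Pi estimate as a sketch, assuming uniform draws in the unit square with R = 1:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Draw (x_i, y_i) uniformly in the unit square, R = 1.
x = rng.random(n)
y = rng.random(n)

# Accept points falling inside the quarter circle x^2 + y^2 < R^2.
accepted = np.count_nonzero(x**2 + y**2 < 1.0)

# The acceptance ratio estimates pi/4, so multiply by 4.
pi_estimate = 4.0 * accepted / n
print(pi_estimate)  # ~3.14 for large n
```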
3) Moreover, I have previously used Metropolis-Hastings in this simple form, i.e. to generate a distribution of points following a known PDF (see the sketch below). But I have also seen that it can be used to estimate the parameters of a model: at what level can we distinguish the Metropolis-Hastings "sampler" from a "parameter estimation" method?
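By the "simple form" I mean something like this minimal random-walk Metropolis-Hastings sketch, targeting a known (possibly unnormalized) 1D PDF with a symmetric Gaussian proposal; names like `log_target` are my own choices, not a standard API:

```python
import numpy as np

def metropolis_hastings(log_target, x0, n_steps, step=0.5, seed=0):
    """Random-walk Metropolis sampler for a 1D log-density `log_target`."""
    rng = np.random.default_rng(seed)
    chain = np.empty(n_steps)
    x, logp = x0, log_target(x0)
    for i in range(n_steps):
        # Propose a move from a symmetric Gaussian (the Hastings
        # correction cancels for a symmetric proposal).
        x_new = x + step * rng.normal()
        logp_new = log_target(x_new)
        # Accept with probability min(1, p(x_new) / p(x)).
        if np.log(rng.random()) < logp_new - logp:
            x, logp = x_new, logp_new
        chain[i] = x
    return chain

# "Sampler" use: draw points from a known PDF, here a standard Gaussian
# (the normalization constant is irrelevant, only ratios matter).
chain = metropolis_hastings(lambda x: -0.5 * x**2, x0=0.0, n_steps=50_000)
print(chain.mean(), chain.std())  # ~0 and ~1 after burn-in
```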
4) For example, there is also the acceptance-rejection method (also called the von Neumann method), widely used in nuclear physics, which likewise generates a distribution from a known PDF: can it also be qualified as a "sampler"?
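Here is the acceptance-rejection method as I understand it, assuming a target PDF f bounded by M times a proposal PDF g that we can already sample from; the target and bound below are my own toy choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def target_pdf(x):
    # Toy target on [0, 1]: f(x) = 3 x^2 (properly normalized).
    return 3.0 * x**2

# Proposal g = Uniform(0, 1); f(x) <= M * g(x) with M = 3 on [0, 1].
M = 3.0
samples = []
while len(samples) < 20_000:
    x = rng.random()           # draw a candidate from g
    u = rng.random()           # uniform height
    if u < target_pdf(x) / M:  # accept with probability f(x) / (M g(x))
        samples.append(x)

samples = np.asarray(samples)
print(samples.mean())  # ~0.75, the mean of f(x) = 3 x^2 on [0, 1]
```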
5) Finally, Markov chain Monte Carlo (MCMC) is often presented purely as a method to estimate the parameters of a model given the data: what are the respective roles of the Monte Carlo part and the Markov chain part in this method? (See the sketch below.)
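To connect points 3) and 5): the same Metropolis-Hastings kernel becomes a parameter estimation tool as soon as the target is a posterior. Here is a sketch reusing `metropolis_hastings` from the point-3 snippet above, with fake data that I generate myself (everything here is a toy assumption): the Markov chain part is the correlated propose/accept walk, the Monte Carlo part is that posterior summaries are computed as averages over the random draws.

```python
import numpy as np

rng = np.random.default_rng(1)

# Fake data: n draws from a Gaussian with unknown mean mu_true, known sigma.
mu_true, sigma = 2.0, 1.0
data = rng.normal(mu_true, sigma, size=200)

def log_posterior(mu):
    # Flat prior, so log posterior = log likelihood + constant.
    return -0.5 * np.sum((data - mu) ** 2) / sigma**2

# Same sampler as in point 3), but now the target is p(mu | data).
chain = metropolis_hastings(log_posterior, x0=0.0, n_steps=50_000)
burned = chain[5_000:]  # discard burn-in

# Monte Carlo estimates of the parameter and its uncertainty.
print(burned.mean(), burned.std())  # ~mu_true, ~sigma/sqrt(len(data))
```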
To summarize, let me show you below the problem I am facing: it is about forecasts in astrophysics. In this post, I am talking about the "inverse problem" in physics, i.e. we do not predict the data from a very accurate theoretical model; instead, I want to estimate the parameters of my theoretical model given the data I have from experiment or from simulated data (what we call fake data). Using Bayes' theorem is very practical in this kind of approach, since the posterior (the probability of the parameters given the data) is proportional to the likelihood (the product of the PDFs evaluated at the data values, given the model parameters) times the prior.
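Written out, the relation I mean is (with theta the model parameters and d the data):

$$ p(\theta \mid d) \;=\; \frac{p(d \mid \theta)\, p(\theta)}{p(d)} \;\propto\; \mathcal{L}(d \mid \theta)\, p(\theta), \qquad \mathcal{L}(d \mid \theta) \;=\; \prod_i p(d_i \mid \theta), $$

where the last product holds for independent data points.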
6) The Fisher formalism is very useful for estimating the standard deviations of the parameters around their fiducial values, but first, we need to know these fiducial values in advance, and second, we have to assume that the posterior distribution is Gaussian, don't we? (Or is it the likelihood that is assumed Gaussian? I don't remember; if someone could point out where this assumption enters, that would help.)
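For reference, the expressions I have in mind here are the Fisher matrix, evaluated at the fiducial parameter values, and the Cramér-Rao bound on the marginalized errors:

$$ F_{ij} \;=\; -\left\langle \frac{\partial^2 \ln \mathcal{L}}{\partial \theta_i\, \partial \theta_j} \right\rangle \Bigg|_{\theta_{\rm fid}}, \qquad \sigma(\theta_i) \;\ge\; \sqrt{(F^{-1})_{ii}}. $$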
So, as you can see, there are a lot of concepts for me to integrate, and I would like to turn this mess into something ordered.
First, I would like to understand the difference between a "sampler" and an estimation method. Beyond that, any remark that clarifies my confusions is welcome. If these differences take real effort to explain, I would be glad to start a bounty.
Remark: I have asked several different questions, but they are all linked in the sense that they use random processes to generate distributions or to estimate parameters (via Bayes' theorem).
Thanks in advance; I hope I have been clear enough. Regards.