Talk:Metropolis–Hastings algorithm


x prime or x_t?


Here are two conflicting comments on the case a>1, which I have cut and pasted for readability. --Rinconsoleao 15:12, 16 July 2007 (UTC)

There seems to be a typo in the last display. If a>1, we would accept the proposal, x', not x_t. --Youyifong 06:38, 28 July 2006 (UTC)
Hi, is there a mistake here? In the step-by-step instructions:

(Postdoc 02:30, 16 July 2007 (UTC))

The algorithm always accepts if a>1. That is, x_{t+1} = x' when a>1. Notice that this is consistent with the statements about the uniform random variable u in the previous paragraph. I have rewritten the "step-by-step instructions" to make this clearer. --Rinconsoleao 15:12, 16 July 2007 (UTC)

Please check the page again. The page mis-typed x' as x_t, as Youyifong mentioned again. Also, should it be "a >= 1"? (Postdoc 05:38, 30 July 2007 (UTC))
Thanks! --Rinconsoleao 15:44, 31 July 2007 (UTC)
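For concreteness, here is a minimal sketch of the procedure in Python (hypothetical names: f is any function proportional to the target density, propose draws from a symmetric proposal). Since u is uniform on [0, 1], the test u <= a always succeeds when a >= 1, so the candidate x' is accepted automatically in that case:

<syntaxhighlight lang="python">
import random

def metropolis(f, propose, x0, n_steps):
    """Metropolis chain for an unnormalized density f with a symmetric proposal."""
    xs = [x0]
    for _ in range(n_steps):
        x_t = xs[-1]
        x_prime = propose(x_t)                 # draw a candidate x'
        a = f(x_prime) / f(x_t)                # acceptance ratio
        u = random.uniform(0.0, 1.0)
        xs.append(x_prime if u <= a else x_t)  # u <= a is certain when a >= 1
    return xs
</syntaxhighlight>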

Gibbs sampling


The article says that the Gibbs sampler is a special case of the MH algorithm. As far as I know, it is a complementary MCMC method which works quite differently. --Smeyen 01:07, 24 February 2007 (UTC)


In the Gibbs sampler, the proposal distribution is the full conditional distribution of some part of the parameter space conditional on the rest, which always results in an acceptance probability of 1. See [http://www.stat.purdue.edu/~lebanon/notes/metropolis.pdf] (Postdoc 05:46, 30 July 2007 (UTC))

Gibbs sampling is a special case of the Metropolis-Hastings sampler, with a proposal distribution that, at each iteration, updates one element of the parameter vector from the distribution of that element conditional on the current value of all of the others: Q(x' | x_t) = P(x'_j | x_{t,-j}), resulting in an acceptance probability that is always 1, so the proposal is always accepted. Proof left as an exercise for the reader, ☺. Gregory R. Warnes (talk) 23:27, 27 October 2017 (UTC)
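For what it's worth, a sketch of that exercise (my notation, not from the article: write x_t = (x_j, x_{-j}), let the proposal Q(x' | x_t) = P(x'_j | x_{-j}) leave x_{-j} unchanged, and factor P(x) = P(x_j | x_{-j}) P(x_{-j})):

a = [P(x') Q(x_t | x')] / [P(x_t) Q(x' | x_t)]
  = [P(x'_j | x_{-j}) P(x_{-j}) P(x_j | x_{-j})] / [P(x_j | x_{-j}) P(x_{-j}) P(x'_j | x_{-j})]
  = 1,

so min(1, a) = 1 and the candidate is always accepted.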

Complexity

The Markov chain is started from a random initial value x^0 and the algorithm is run for many iterations until this initial state is "forgotten"

No bounds are known on the number of iterations needed, i.e. the mixing time of the Markov chain. Can we really claim in the introduction that this algorithm allows us to compute something if we don't know how long it should be iterated? Levochik (talk) 07:43, 26 August 2009 (UTC)

Example


First of all, thank you for this article. It's the best I could find on the web about Monte Carlo simulation. But unfortunately, I still don't get it. How can I compute a = a_1 a_2 if I don't know P(x)? Perhaps you could additionally include an example: something where one can see the step-by-step instructions applied to a simple case, like a record of a few steps from a real simulation. —Preceding unsigned comment added by Askamaster (talkcontribs) 10:30, 25 March 2010 (UTC)

You don't need P(x), you only need a function proportional to P(x), as the normalizations cancel in the ratio. It is quite often the case that you can easily determine a function proportional to P(x), but the integral to normalize it is very difficult to perform. This comes up quite often in Bayesian computations of the posterior distribution. Rjljr2 (talk) 15:49, 11 April 2010 (UTC)
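A tiny illustration of the point (a hypothetical one-dimensional example): the unnormalized f below omits the 1/sqrt(2*pi) factor of the standard normal density, yet yields exactly the same acceptance ratio as the normalized version, because the constant cancels in the ratio:

<syntaxhighlight lang="python">
import math

def f_unnorm(x):               # proportional to the standard normal density
    return math.exp(-0.5 * x * x)

def f_norm(x):                 # fully normalized density
    return f_unnorm(x) / math.sqrt(2.0 * math.pi)

x_t, x_prime = 0.3, 1.1
a_unnorm = f_unnorm(x_prime) / f_unnorm(x_t)
a_norm = f_norm(x_prime) / f_norm(x_t)
print(a_unnorm, a_norm)        # identical: the normalization cancels
</syntaxhighlight>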
The article is NOT good and NOT clear. Do you want improvement?

The best part is who created it. But the why, what, and when are not answered. Does it give the algorithm in enough detail that one can code it? Nope. Does it provide an example? No. Juror1 (talk) 10:28, 30 November 2017 (UTC)


The article suffers a bit from the PhD-on-Wikipedia syndrome. It describes a fairly simple idea, but it tries to be very general, and that makes it hard to read for people who don't know the subject beforehand. Compare with the article on Simulated_annealing for a simpler article on the same subject. nielsle — Preceding unsigned comment added by 89.23.239.219 (talk) 08:17, 13 January 2018 (UTC)

Proposal density


Clearly the Markov chain doesn't converge for all proposal densities Q. Certainly Q > 0 is sufficient, but I don't believe it is necessary. What condition is required for the proposal density? — Preceding unsigned comment added by WuTheFWasThat (talkcontribs) 19:24, 16 May 2012 (UTC)

Overview


The section Overview tries to give a formal derivation of the algorithm. The foregoing sections are so well written that it was very easy to understand for me, even though I am not an expert in the field. Bringing the section Overview to the same didactic level would be an asset. The questions I would like to have answered are: How can I see that the equation is still redundant, and what does the term redundant mean in this context? How does one come to the idea of defining the acceptance distribution A(x', x) as the specified minimum function?

Some other proposals for improvements: The meaning of the variable A seems to be different from the one in the section Intuition. Would it be an option to use another notation? Furthermore, the section Overview repeats the definition of the algorithm from the section Intuition. However, at the same time it lists the steps of skipping samples to mitigate correlations. I suggest putting the complete list of steps into the previous section. Then a proper name for the section Overview would be, e.g., Formal Derivation.

Sorry, I am new to Wikipedia and do not want to just poke your text. Please let me know if you wish me to do some of the mentioned improvements. —Contribo (talk) 20:20, 3 December 2012 (UTC)

I agree with you, "Overview" is not the proper name; maybe "Formal derivation", as you suggested, sounds better. Nevertheless, the Intuition section does not present the modern view of the Metropolis-Hastings algorithm, in the sense that it does not use a general proposal (the proposal is always assumed symmetrical, which is not the general case). However, I think the Intuition part is there to convey the idea of the method; that is why it skips the fact that the proposal also has to be considered in the acceptance in the general case.
Redundant in the sense that there can be other choices for A which also fulfill the equation you mention (maybe it is under-determined). I don't recall my statistical physics lectures, but I know there is at least one other choice of A which also fulfills it and yet does not use the "min". Jorgecarleitao (talk) 20:53, 3 December 2012 (UTC)
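One concrete alternative, assuming this is the choice meant (it is the acceptance function usually attributed to Barker, 1965), is

A(x', x) = P(x') Q(x | x') / (P(x') Q(x | x') + P(x) Q(x' | x)).

Since the denominator is symmetric in x and x', the ratio A(x', x) / A(x, x') equals P(x') Q(x | x') / (P(x) Q(x' | x)), so the balance condition holds without using the "min"; the Metropolis-Hastings choice is generally preferred because it accepts more often.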
Thank you for your explanations, I did not expect to get an answer so quickly. I bring these issues into discussion not just because I would like to understand the matter, but also to motivate you to work towards improving this article together with me. In my role as the learner I offer to ask you questions, and you as the author could modify the article so as to answer these questions. Of course I offer to do modifications as well, as far as I am qualified. Does this sound attractive?
In this sense let me point out some further aspects. You write that the proposal also has to be considered in the acceptance in the general case. I am not sure if I get you right. Isn't this considered in the section Step-by-step Instructions? This brings me to another point: the Step-by-step Instructions should contain the skipping of T states to make the resulting sequence uncorrelated.
I noticed some more points regarding the section Overview: What is the role of ergodicity in this explanation? The text says that it can be shown that ergodicity is a requirement for the process to converge asymptotically. A reference to the proof would be great. How is it ensured that the process is ergodic? Another point: the Markov matrices in the wiki reference have finite dimensions. However, the Markov matrix for the process under discussion is infinite-dimensional; this difference should at least be mentioned. --Contribo (talk) 21:24, 3 December 2012 (UTC)
"In practice" link

The link which is labeled "in practice" and leads to Bayesian statistics should be rephrased so as to not surprise the reader, as per WP:EASTEREGG. There should probably be some brief remark explaining what Bayesian statistics are and how they relate, but I don't know how to write such a remark. — Preceding unsigned comment added by 132.65.153.93 (talk) 18:03, 12 November 2013 (UTC)

Interacting Metropolis-Hastings methodology


(This was in the main article, and even though it has a lot of information, it does not belong in Metropolis-Hastings. It is about sequential Monte Carlo and Markov chain Monte Carlo, as the text below itself states. I'm storing it here so it is not lost in the history.)

The central idea is to consider an interpolating sequence of target distributions π_0, ..., π_n, with an increasing complexity of sampling, such that the final one is the target distribution. We let h_k be a function that is proportional to π_k, for each k. In this notation, for any k we clearly have a product formula for some normalizing constants Z_k. When the state space S is finite, for any x ∈ S we have

π_k(x) = h_k(x) / Σ_y h_k(y). For probability densities w.r.t. the Lebesgue measure dx on R^d, for some dimension d, we have π_k(x) = h_k(x) / ∫ h_k(y) dy.

We assume that it is easy to sample independent random variables with common probability π_0, and that the ratio functions G_k = h_k / h_{k-1} take values in [0, 1] and can be computed easily for any state x.

We illustrate these rather abstract models with a couple of examples:

  • Boltzmann-Gibbs distributions associated with an energy function E on some finite set S and some inverse temperature parameter β are defined by π_β(x) ∝ exp(-β E(x)). In this situation, we can choose the interpolating sequence π_k = π_{β_k} for an increasing sequence of parameters β_0 ≤ β_1 ≤ ... ≤ β_n = β. By construction, with E ≥ 0, the ratios G_k = exp(-(β_k - β_{k-1}) E) take values in [0, 1].
  • Let p(x) be the probability density of some real-valued random variable X. Let A be some subset such that P(X ∈ A) > 0, and set h(x) = p(x) 1_A(x), with the indicator function 1_A of the set A. In this situation, we can choose the interpolating sequence h_k(x) = p(x) 1_{A_k}(x) for a decreasing sequence of subsets A_0 ⊃ A_1 ⊃ ... ⊃ A_n = A. By construction, the ratios G_k = 1_{A_k} take values in {0, 1}.

For any k, denote by M_k the transition probability of the Metropolis-Hastings algorithm with target invariant measure π_k. To simplify the presentation, we assume that the state space S is finite.

  • Initially, we sample a large number N of independent random variables x_1, ..., x_N with common probability distribution π_0. By the law of large numbers, the empirical measure of this population approximates π_0 for large N.
  • Acceptance-rejection with recycling: The objective is to design a new population of samples (x̂_1, ..., x̂_N) with distribution π_1. With a probability G_1(x_i) we accept the state x_i and set x̂_i = x_i. Otherwise, we sample a new individual x̂_i from the current population, choosing x_j with probability proportional to G_1(x_j). By the law of large numbers, the empirical measure of the new population approximates π_1 for large N.

  • Proposal-exploration: The objective is to design a new population with distribution π_1 using the Metropolis-Hastings transitions M_1. From each selected state x̂_i we independently sample a transition with the kernel M_1. The next acceptance-rejection transition with recycling is defined as above by replacing G_1 by G_2; and the second proposal-exploration step is defined by moving the selected states with the Metropolis-Hastings transition M_2, and so on. A rough code sketch of one such sweep is given below.
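Purely as a rough illustration (my own sketch, not from the removed text): one selection/exploration sweep in Python for the Boltzmann-Gibbs example above, assuming an energy function E ≥ 0 so that the potentials G_k(x) = exp(-(β_k - β_{k-1}) E(x)) lie in [0, 1]; mh_move stands for any Metropolis-Hastings kernel leaving π_k invariant:

<syntaxhighlight lang="python">
import math
import random

def smc_sweep(population, energy, beta_prev, beta_k, mh_move):
    """One acceptance-rejection-with-recycling step followed by one
    proposal-exploration step, for targets pi_k proportional to exp(-beta_k * E)."""
    # Potentials G_k(x) = exp(-(beta_k - beta_prev) * E(x)), in [0, 1] when E >= 0.
    g = [math.exp(-(beta_k - beta_prev) * energy(x)) for x in population]
    selected = []
    for x_i, g_i in zip(population, g):
        if random.random() <= g_i:
            selected.append(x_i)     # accept (keep) the current state
        else:
            # Recycle: redraw from the population with weights proportional to G_k.
            selected.append(random.choices(population, weights=g, k=1)[0])
    # Exploration: move each selected state with an MH kernel targeting pi_k.
    return [mh_move(x, beta_k) for x in selected]
</syntaxhighlight>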

The interacting Metropolis-Hastings algorithms defined above belong to the class of mean-field interacting particle methods[1] and Feynman-Kac particle models[2][3] (a.k.a. Sequential Monte Carlo methodologies[4]). In contrast to traditional Markov chain Monte Carlo methods, which rely on the stability of a single Markov chain, the precision of interacting Metropolis-Hastings only depends on the size N of the system, in the sense that for any fixed time horizon the approximation errors are of order 1/√N.

In addition, we have unbiased estimates of the normalizing constants Z_k, and for any function f(·) on S we have the corresponding Feynman-Kac formulae,

where the reference process stands for a non-homogeneous Markov chain with initial distribution π_0 and Markov transitions M_k.

The analysis of the convergence of Feynman-Kac particle methods was started in 1996[5][6] and continued in 2000 in the book[3] and the series of articles.[7][8][9][10][11][12] More recent developments can be found in the books.[1][13] Applications of these Feynman-Kac methodologies to interacting Metropolis-Hastings algorithms are discussed in the series of articles.[4][14] The unbiasedness properties of the particle estimates of the ratios of normalizing constants are proved in [5] in the context of filtering problems. Online adaptive choices of the temperature parameters and of the decreasing subsets in the above examples can also be found in.[15] In operations research, the interacting MCMC algorithm associated with target distributions defined in terms of indicator functions is sometimes referred to as subset simulation.

References

  1. ^ a b Del Moral, Pierre (2013). Mean field simulation for Monte Carlo integration. Chapman & Hall/CRC Press. p. 626. Monographs on Statistics & Applied Probability
  2. ^ Del Moral, Pierre (2004). Feynman-Kac formulae. Genealogical and interacting particle approximations. Springer. p. 575. Series: Probability and Applications
  3. ^ a b Del Moral, Pierre; Miclo, Laurent (2000). "Branching and Interacting Particle Systems Approximations of Feynman-Kac Formulae with Applications to Non-Linear Filtering" (PDF). Séminaire de Probabilités XXXIV. Lecture Notes in Mathematics. Vol. 1729. pp. 1–145. doi:10.1007/bfb0103798.
  4. ^ a b "Sequential Monte Carlo samplers - P. Del Moral - A. Doucet - A. Jasra - 2006 - Journal of the Royal Statistical Society: Series B (Statistical Methodology) - Wiley Online Library". onlinelibrary.wiley.com. Retrieved 2015-06-11.
  5. ^ a b Del Moral, Pierre (1996). "Non Linear Filtering: Interacting Particle Solution" (PDF). Markov Processes and Related Fields. 2 (4): 555–580.
  6. ^ Del Moral, Pierre (1998). "Measure Valued Processes and Interacting Particle Systems. Application to Non Linear Filtering Problems". Annals of Applied Probability. 8 (2): 438–495. doi:10.1214/aoap/1028903535.
  7. ^ Crisan, Dan; Del Moral, Pierre; Lyons, Terry (1999). "Discrete filtering using branching and interacting particle systems" (PDF). Markov Processes and Related Fields. 5 (3): 293–318.
  8. ^ Del Moral, Pierre; Guionnet, Alice (1999). "On the stability of Measure Valued Processes with Applications to filtering". C.R. Acad. Sci. Paris. 39 (1): 429–434.
  9. ^ Del Moral, Pierre; Guionnet, Alice (2001). "On the stability of interacting processes with applications to filtering and genetic algorithms". Annales de l'Institut Henri Poincaré. 37 (2): 155–194. doi:10.1016/s0246-0203(00)01064-5.
  10. ^ Del Moral, P.; Guionnet, A. (1999). "Central limit theorem for nonlinear filtering and interacting particle systems". The Annals of Applied Probability. 9 (2): 275–297. doi:10.1214/aoap/1029962742. ISSN 1050-5164.
  11. ^ Del Moral, Pierre; Miclo, Laurent (2001). "Genealogies and Increasing Propagation of Chaos For Feynman-Kac and Genetic Models". The Annals of Applied Probability. 11 (4): 1166–1198. doi:10.1214/aoap/1015345399. ISSN 1050-5164.
  12. ^ Del Moral, Pierre; Rio, Emmanuel (2011). "Concentration inequalities for mean field particle models". The Annals of Applied Probability. 21 (3): 1017–1052. doi:10.1214/10-AAP716. ISSN 1050-5164.
  13. ^ Del Moral, Pierre (2004). Feynman-Kac formulae. Genealogical and interacting particle approximations. Series: Probability and Applications. Springer. p. 556. ISBN 978-0-387-20268-6.
  14. ^ Del Moral, Pierre; Hu, Peng; Wu, Liming (2012). On the Concentration Properties of Interacting Particle Processes. Hanover, MA, USA: Now Publishers Inc. ISBN 1601985126.
  15. ^ Del Moral, Pierre; Doucet, Arnaud; Jasra, Ajay (2012). "On Adaptive Resampling Procedures for Sequential Monte Carlo Methods" (PDF). Bernoulli. 18 (1): 252–278. doi:10.3150/10-bej335.

Thinning


The paragraph on thinning is very poor. In particular the statement "this means that if we want a set of independent samples, we have to throw away the majority of samples and only take every nth sample" is plain wrong.

There is no such guarantee of independence, regardless of the interval chosen. In addition, most authors advise against the practice, since while thinning reduces correlation between subsequent steps, it also greatly reduces the number of samples finally obtained. There are some exceptions, as Art Owen discusses here http://statweb.stanford.edu/~owen/reports/bestthinning.pdf.

Patrick1894 (talk) 14:42, 17 April 2019 (UTC)
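To illustrate the point numerically, here is a small sketch (a hypothetical AR(1) series standing in for correlated MCMC output): thinning by a factor n reduces the lag-1 autocorrelation from about ρ to about ρ^n, which is smaller but still not zero, so the thinned draws are not independent.

<syntaxhighlight lang="python">
import random

rho, n_samples, thin = 0.95, 100_000, 10
chain = [0.0]
for _ in range(n_samples - 1):          # AR(1) proxy for correlated MCMC output
    chain.append(rho * chain[-1] + random.gauss(0.0, 1.0))

def lag1_autocorr(xs):
    m = sum(xs) / len(xs)
    num = sum((a - m) * (b - m) for a, b in zip(xs, xs[1:]))
    den = sum((a - m) ** 2 for a in xs)
    return num / den

print(lag1_autocorr(chain))             # roughly 0.95
print(lag1_autocorr(chain[::thin]))     # roughly 0.95**10 = 0.60, not 0
</syntaxhighlight>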

Rename? (In Wikipedia parlance suggested Move)


In my experience, supported for example by the frequencies of the terms "Metropolis", "Metropolis-Hastings" and "Monte Carlo" recorded between 1953 and 2012 in Google's Ngram Viewer [1], this algorithm is far more often called "Metropolis" than "Metropolis-Hastings". Therefore I suggest the article be renamed (moved) to "Metropolis algorithm", with Metropolis-Hastings redirecting to it, and with a subsection, now lacking, that specifically explains and compares Hastings' 1970 improvement and generalization to the original 1953 algorithm. CharlesHBennett (talk) 22:06, 22 January 2020 (UTC)

Neutral. In my experience, "Metropolis" is more common than "Metropolis-Hastings" as well. However, I don't think we get a lot by moving this. Metropolis algorithm already redirects to this article. BernardoSulzbach (talk) 10:59, 23 January 2020 (UTC)

Formal derivation of transition probability incomplete


It is stated that the transition probability P(x' | x) can be separated into a proposal step and an accept/reject step. Mathematically,

P(x' | x) = g(x' | x) A(x', x),

where g(x' | x) is the proposal distribution and A(x', x) the acceptance distribution.

This transition distribution does not take rejection into account, however. The transition probability is only complete with an additional rejection term:

P(x' | x) = g(x' | x) A(x', x) + δ(x' - x) ∫ (1 - A(x'', x)) g(x'' | x) dx'',

where δ(x' - x) is understood as the Dirac delta function. It is only later in the derivation, with the choice made for A(x', x), that the last term vanishes, since a transition where x' = x will always be accepted (A(x, x) = 1). — Preceding unsigned comment added by 213.125.242.82 (talk) 19:46, 8 April 2020 (UTC)

Oughtn't the core procedure be made a tad clearer? Suggest revision.


Calculate the acceptance ratio α = f(x')/f(x_t), which will be used to decide whether to accept or reject the candidate. Because f is proportional to the density of P, we have that α = f(x')/f(x_t) = P(x')/P(x_t).


  • Accept or reject:
    • Generate a uniform random number u ∈ [0, 1].
    • If u ≤ α, then accept the candidate by setting x_{t+1} = x',
    • If u > α, then reject the candidate and set x_{t+1} = x_t instead.


May I suggest a rewrite as follows, which shows the Hastings step more clearly? The above is correct, but:

  • It forces a generation of u unnecessarily when α ≥ 1, and
  • It requires thinking to see that it works, specifically in the case α ≥ 1.


Suggested revision:

  • Accept or reject:
    • If α ≥ 1, then accept the candidate by setting x_{t+1} = x'.
    • If α < 1, then
      • Generate a uniform random number u ∈ [0, 1].
      • If u ≤ α, then accept the candidate (which happens with probability α) by setting x_{t+1} = x'.
      • If u > α, then reject the candidate and set x_{t+1} = x_t instead.


bayesianlogic.1@gmail.com

19:28, 14 December 2020 (UTC)
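A minimal sketch of the suggested revision above (assuming α = f(x')/f(x_t) has already been computed; names are hypothetical):

<syntaxhighlight lang="python">
import random

def accept_or_reject(x_t, x_prime, alpha):
    """Suggested revision: generate u only when alpha < 1."""
    if alpha >= 1:
        return x_prime                     # accept outright
    u = random.uniform(0.0, 1.0)
    return x_prime if u <= alpha else x_t  # accept with probability alpha
</syntaxhighlight>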


The plot with the three chains needs clarification


The Rosenbrock function is typically used as an example for minimization, not maximization. Metropolis-Hastings can be seen as a maximization algorithm, though, and indeed the caption is written as if we were maximizing something. Perhaps the function being used is 1/Rosenbrock, in such a way that the minimum becomes a maximum?


Lnz.Rossi (talk) 16:59, 22 May 2021 (UTC)

Maximization of some function f is the same as minimization of -f. Here, instead of minimizing Rosenbrock, the Metropolis-Hastings algorithm is maximizing -1×Rosenbrock. RFZYNSPY talk 02:47, 21 March 2024 (UTC)
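For what it's worth, one plausible reading (an assumption on my part, not verified against the plot's source) is that the sampled target is proportional to exp(-Rosenbrock), whose maximum sits at the Rosenbrock minimum (1, 1):

<syntaxhighlight lang="python">
import math

def rosenbrock(x, y, a=1.0, b=100.0):
    return (a - x) ** 2 + b * (y - x * x) ** 2

def target_unnorm(x, y):
    # Unnormalized density: largest where Rosenbrock is smallest, i.e. at (1, 1).
    return math.exp(-rosenbrock(x, y))
</syntaxhighlight>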

Acceptance ratio formula failing to parse


Under 'Intuition', I see the following parsing error instead of the acceptance ratio formula: "Calculate the acceptance ratio Failed to parse (SVG (MathML can be enabled via browser plugin): Invalid response ("Math extension cannot connect to Restbase.") from server "/mathoid/local/v1/":): {\displaystyle \alpha = f(x')/f(x_t)} , which will be used to decide whether to accept or reject the candidate. Because f is proportional to the density of P, we have that..."

When I try to edit, this error doesn't show up in the preview, so I'm not sure how to fix it. 128.12.123.150 (talk) 03:22, 17 February 2023 (UTC)

It looks fine from my end. Perhaps some sort of browser incompatibility? Qflib, aka KeeYou Flib (talk) 04:43, 18 February 2023 (UTC)