A Stein discrepancy is a statistical divergence between two probability measures that is rooted in Stein's method. It was first formulated as a tool to assess the quality of Markov chain Monte Carlo samplers,[1] but has since been used in diverse settings in statistics, machine learning and computer science.[2]

Definition

Let $\mathcal{X}$ be a measurable space and let $\mathcal{M}$ be a set of measurable functions of the form $m : \mathcal{X} \to \mathbb{R}$. A natural notion of distance between two probability distributions $P$, $Q$, defined on $\mathcal{X}$, is provided by an integral probability metric[3]

$d_{\mathcal{M}}(P, Q) := \sup_{m \in \mathcal{M}} \left| \mathbb{E}_{X \sim P}[m(X)] - \mathbb{E}_{Y \sim Q}[m(Y)] \right| \qquad (1.1)$

where for the purposes of exposition we assume that the expectations exist, and that the set $\mathcal{M}$ is sufficiently rich that (1.1) is indeed a metric on the set of probability distributions on $\mathcal{X}$, i.e. $d_{\mathcal{M}}(P, Q) = 0$ if and only if $P = Q$. The choice of the set $\mathcal{M}$ determines the topological properties of (1.1). However, for practical purposes the evaluation of (1.1) requires access to both $P$ and $Q$, often rendering direct computation of (1.1) impractical.

Stein's method is a theoretical tool that can be used to bound (1.1). Specifically, we suppose that we can identify an operator $\mathcal{A}_P$ and a set $\mathcal{F}$ of real-valued functions in the domain of $\mathcal{A}_P$, both of which may be $P$-dependent, such that for each $m \in \mathcal{M}$ there exists a solution $f_m \in \mathcal{F}$ to the Stein equation

$m(x) - \mathbb{E}_{X \sim P}[m(X)] = \mathcal{A}_P f_m(x) \qquad (1.2)$

The operator $\mathcal{A}_P$ is termed a Stein operator and the set $\mathcal{F}$ is called a Stein set. Substituting (1.2) into (1.1), we obtain an upper bound

$d_{\mathcal{M}}(P, Q) = \sup_{m \in \mathcal{M}} \left| \mathbb{E}_{Y \sim Q}[\mathcal{A}_P f_m(Y)] \right| \;\leq\; \sup_{f \in \mathcal{F}} \left| \mathbb{E}_{Y \sim Q}[\mathcal{A}_P f(Y)] \right| .$

This resulting bound

$D_P(Q) := \sup_{f \in \mathcal{F}} \left| \mathbb{E}_{Y \sim Q}[\mathcal{A}_P f(Y)] \right|$

is called a Stein discrepancy.[1] In contrast to the original integral probability metric $d_{\mathcal{M}}(P, Q)$, it may be possible to analyse or compute $D_P(Q)$ using expectations only with respect to the distribution $Q$.
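For example, if $P$ is the standard Gaussian distribution on $\mathbb{R}$ with density $p(x) \propto e^{-x^2/2}$, and $\mathcal{A}_P$ is the Langevin–Stein operator introduced in the examples below, then

$\mathcal{A}_P f(x) = f'(x) + f(x)\,\tfrac{d}{dx}\log p(x) = f'(x) - x f(x),$

and integration by parts shows that $\mathbb{E}_{X \sim P}[\mathcal{A}_P f(X)] = 0$ for all sufficiently regular $f$; an expectation $\mathbb{E}_{Y \sim Q}[\mathcal{A}_P f(Y)]$ that is far from zero therefore witnesses a discrepancy between $Q$ and $P$.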

Examples

Several different Stein discrepancies have been studied, with some of the most widely used presented next.

Classical Stein discrepancy

For a probability distribution $P$ with positive and differentiable density function $p$ on a convex set $\mathcal{X} \subseteq \mathbb{R}^d$, whose boundary is denoted $\partial\mathcal{X}$, the combination of the Langevin–Stein operator $\mathcal{A}_P f = \nabla \cdot f + \langle f, \nabla \log p \rangle$ and the classical Stein set

$\mathcal{F} = \left\{ f : \mathcal{X} \to \mathbb{R}^d \;\middle|\; \sup_{x \neq y} \max\!\left( \|f(x)\|, \|\nabla f(x)\|, \frac{\|\nabla f(x) - \nabla f(y)\|}{\|x - y\|} \right) \leq 1, \;\; \langle f(x), n(x) \rangle = 0 \text{ for all } x \in \partial\mathcal{X} \right\}$

yields the classical Stein discrepancy.[1] Here $\|\cdot\|$ denotes the Euclidean norm, $\langle\cdot,\cdot\rangle$ the Euclidean inner product, $\|M\| = \sup_{\|v\|=1}\|Mv\|$ the associated operator norm for matrices $M \in \mathbb{R}^{d \times d}$, and $n(x)$ the outward unit normal to $\partial\mathcal{X}$ at location $x$. If $\mathcal{X} = \mathbb{R}^d$ then we interpret $\partial\mathcal{X} = \emptyset$, so that the boundary condition is vacuous.

In the univariate case $d = 1$, the classical Stein discrepancy can be computed exactly by solving a quadratically constrained quadratic program.[1]

Graph Stein discrepancy

The first known computable Stein discrepancies were the graph Stein discrepancies (GSDs). Given a discrete distribution $Q = \sum_{i=1}^n w_i \delta(x_i)$, one can define the graph $G = (V, E)$ with vertex set $V = \{x_1, \dots, x_n\}$ and edge set $E \subseteq V \times V$. From this graph, one can define the graph Stein set as

 

The combination of the Langevin–Stein operator and the graph Stein set is called the graph Stein discrepancy (GSD). The GSD is in fact the solution of a finite-dimensional linear program, with the size of $E$ as low as linear in $n$, meaning that the GSD can be efficiently computed.[1]

Kernel Stein discrepancy

The supremum arising in the definition of the Stein discrepancy can be evaluated in closed form using a particular choice of Stein set. Indeed, let $\mathcal{F}$ be the unit ball in a (possibly vector-valued) reproducing kernel Hilbert space $H(K)$ with reproducing kernel $K$, whose elements are in the domain of the Stein operator $\mathcal{A}_P$. Suppose that

  • For each fixed $x \in \mathcal{X}$, the map $f \mapsto \mathcal{A}_P f(x)$ is a continuous linear functional on $H(K)$.
  • $\mathbb{E}_{X, X' \sim Q}\left[ \mathcal{A}_P \mathcal{A}_P' K(X, X') \right] < \infty$,

where the Stein operator $\mathcal{A}_P$ acts on the first argument of $K(\cdot, \cdot)$ and $\mathcal{A}_P'$ acts on the second argument. Then it can be shown[4] that

$D_P(Q) = \sqrt{ \mathbb{E}_{X, X' \sim Q}\left[ \mathcal{A}_P \mathcal{A}_P' K(X, X') \right] },$

where the random variables $X$ and $X'$ in the expectation are independent draws from $Q$. In particular, if $Q = \sum_{i=1}^n w_i \delta(x_i)$ is a discrete distribution on $\mathcal{X}$, then the Stein discrepancy takes the closed form

$D_P(Q) = \sqrt{ \sum_{i=1}^n \sum_{j=1}^n w_i w_j \, \mathcal{A}_P \mathcal{A}_P' K(x_i, x_j) }.$

A Stein discrepancy constructed in this manner is called a kernel Stein discrepancy[5][6][7][8] and the construction is closely connected to the theory of kernel embedding of probability distributions.

Let $k : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ be a reproducing kernel. For a probability distribution $P$ with positive and differentiable density function $p$ on $\mathcal{X} \subseteq \mathbb{R}^d$, the combination of the Langevin–Stein operator $\mathcal{A}_P f = \nabla \cdot f + \langle f, \nabla \log p \rangle$ and the Stein set

$\mathcal{F} = \left\{ f = (f_1, \dots, f_d) \;:\; \sum_{i=1}^d \| f_i \|_{H(k)}^2 \leq 1 \right\},$

associated to the matrix-valued reproducing kernel $K(x, y) = k(x, y) I_{d \times d}$, yields a kernel Stein discrepancy with[5]

$\mathcal{A}_P \mathcal{A}_P' K(x, y) = \nabla_x \cdot \nabla_y k(x, y) + \langle \nabla_x k(x, y), \nabla_y \log p(y) \rangle + \langle \nabla_y k(x, y), \nabla_x \log p(x) \rangle + k(x, y) \, \langle \nabla_x \log p(x), \nabla_y \log p(y) \rangle \qquad (2.1)$

where $\nabla_x$ (resp. $\nabla_y$) indicates the gradient with respect to the argument indexed by $x$ (resp. $y$).

Concretely, if we take the inverse multi-quadric kernel $k(x, y) = \left(1 + (x - y)^\top \Sigma^{-1} (x - y)\right)^{-\beta}$ with parameters $\beta \in (0, 1)$ and $\Sigma \in \mathbb{R}^{d \times d}$ a symmetric positive definite matrix, and if we denote $u(x) = \nabla \log p(x)$, then we have

$\mathcal{A}_P \mathcal{A}_P' K(x, y) = -\frac{4\beta(\beta+1)\,(x-y)^\top \Sigma^{-2} (x-y)}{\left(1 + (x-y)^\top \Sigma^{-1}(x-y)\right)^{\beta+2}} + \frac{2\beta\left[\operatorname{tr}(\Sigma^{-1}) + \langle u(x) - u(y), \Sigma^{-1}(x-y)\rangle\right]}{\left(1 + (x-y)^\top \Sigma^{-1}(x-y)\right)^{\beta+1}} + \frac{\langle u(x), u(y)\rangle}{\left(1 + (x-y)^\top \Sigma^{-1}(x-y)\right)^{\beta}} .$
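The following sketch (in Python, with illustrative function names, specialised to $\Sigma = I$ and a standard Gaussian target) indicates how (2.1) can be evaluated numerically and combined with the closed form above to compute a kernel Stein discrepancy for a sample:

```python
import numpy as np

def imq_stein_kernel(x, y, score, beta=0.5):
    # Langevin-Stein kernel A_P A_P' K(x, y) for the inverse multi-quadric
    # base kernel k(x, y) = (1 + ||x - y||^2)^(-beta), i.e. Sigma = I.
    d = x.shape[0]
    diff = x - y
    s = 1.0 + diff @ diff                      # 1 + ||x - y||^2
    sx, sy = score(x), score(y)                # u(x) = grad log p(x), u(y)
    return (-4.0 * beta * (beta + 1.0) * (diff @ diff) / s ** (beta + 2)
            + 2.0 * beta * (d + diff @ (sx - sy)) / s ** (beta + 1)
            + (sx @ sy) / s ** beta)

def ksd(samples, score, beta=0.5):
    # Kernel Stein discrepancy of the empirical distribution of `samples`
    # (one point per row, equal weights 1/n) from the target with the
    # given score function, via the double-sum closed form.
    n = len(samples)
    total = sum(imq_stein_kernel(samples[i], samples[j], score, beta)
                for i in range(n) for j in range(n))
    return np.sqrt(total / n ** 2)

score_std_normal = lambda x: -x                # grad log p for p = N(0, I)
rng = np.random.default_rng(0)
good = rng.standard_normal((100, 2))           # drawn from the target
poor = good + 1.0                              # shifted away from the target
print(ksd(good, score_std_normal), ksd(poor, score_std_normal))
```

Note that only the score function $\nabla \log p$ of the target enters this computation, anticipating the property discussed under "Computable without the normalisation constant" below.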

Diffusion Stein discrepancy

Diffusion Stein discrepancies[9] generalize the Langevin–Stein operator $\mathcal{A}_P$ to a class of diffusion Stein operators, each representing an Itô diffusion that has $P$ as its stationary distribution. Here, $m$ is a matrix-valued function determined by the infinitesimal generator of the diffusion.

Other Stein discrepancies

Additional Stein discrepancies have been developed for constrained domains,[10] non-Euclidean domains,[11][12][10] discrete domains,[13][14] improved scalability,[15][16] and gradient-free settings in which derivatives of the density $p$ are circumvented.[17] This approach has further been extended to the gradient-free kernel conditional Stein discrepancy, which targets conditional distributions.[18]

Properties

The flexibility in the choice of Stein operator and Stein set in the construction of Stein discrepancy precludes general statements of a theoretical nature. However, much is known about the particular Stein discrepancies described above.

Computable without the normalisation constant

Stein discrepancy can sometimes be computed in challenging settings where the probability distribution $P$ admits a probability density function $p$ (with respect to an appropriate reference measure on $\mathcal{X}$) of the form $p(x) = \tilde{p}(x) / Z$, where $\tilde{p}$ and its derivative can be numerically evaluated but whose normalisation constant $Z$ is not easily computed or approximated. Considering (2.1), we observe that the dependence of $D_P(Q)$ on $p$ occurs only through the term

$\nabla_x \log p(x) = \nabla_x \log \tilde{p}(x) - \nabla_x \log Z = \nabla_x \log \tilde{p}(x),$

which does not depend on the normalisation constant $Z$.
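As a small illustration (the function names are hypothetical, not drawn from a specific library), the score $\nabla_x \log p$ of an unnormalised density can be evaluated without ever computing $Z$:

```python
import numpy as np

def log_p_tilde(x):
    # Unnormalised log-density of a two-component Gaussian mixture;
    # the normalisation constant Z is never computed.
    return np.logaddexp(-0.5 * (x - 2.0) ** 2, -0.5 * (x + 2.0) ** 2)

def score(x, h=1e-5):
    # grad log p(x) = grad log p_tilde(x): the additive constant -log Z
    # disappears under differentiation; here the derivative is approximated
    # by a central finite difference.
    return (log_p_tilde(x + h) - log_p_tilde(x - h)) / (2.0 * h)

print(score(0.5))   # usable inside (2.1) even though Z is unknown
```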

Stein discrepancy as a statistical divergence

A basic requirement of Stein discrepancy is that it is a statistical divergence, meaning that $D_P(Q) \geq 0$ for all $Q$ and $D_P(Q) = 0$ if and only if $P = Q$. This property can be shown to hold for the classical Stein discrepancy[1] and the kernel Stein discrepancy,[6][7][8] provided that appropriate regularity conditions hold.

Convergence control

A stronger property, compared to being a statistical divergence, is convergence control, meaning that $D_P(Q_n) \to 0$ implies that $Q_n$ converges to $P$ in a sense to be specified. For example, under appropriate regularity conditions, both the classical Stein discrepancy and the graph Stein discrepancy enjoy Wasserstein convergence control, meaning that $D_P(Q_n) \to 0$ implies that the Wasserstein metric between $Q_n$ and $P$ converges to zero.[1][19][9] For the kernel Stein discrepancy, weak convergence control has been established[8][20] under regularity conditions on the distribution $P$ and the reproducing kernel $k$, which are applicable in particular to (2.1). Other well-known choices of $k$, such as the Gaussian kernel, provably do not enjoy weak convergence control.[8]

Convergence detection

The converse property to convergence control is convergence detection, meaning that $D_P(Q_n) \to 0$ whenever $Q_n$ converges to $P$ in a sense to be specified. For example, under appropriate regularity conditions, the classical Stein discrepancy enjoys a particular form of mean-square convergence detection,[1][9] meaning that $D_P(Q_n) \to 0$ whenever a sequence $X_n \sim Q_n$ converges in mean square to $X \sim P$ and $\nabla \log p(X_n)$ converges in mean square to $\nabla \log p(X)$. For the kernel Stein discrepancy, Wasserstein convergence detection has been established,[8] under appropriate regularity conditions on the distribution $P$ and the reproducing kernel $k$.

Applications of Stein discrepancy

Several applications of Stein discrepancy have been proposed, some of which are now described.

Optimal quantisation

Video caption: Optimal quantisation using Stein discrepancy. The contours in the video represent level sets of a continuous probability distribution $P$, and the task is to summarise this distribution with a discrete set of states $\{x_1, \dots, x_n\}$ selected from its domain $\mathcal{X}$. In particular, the density function $p$ is assumed to be known only up to proportionality, a setting where Markov chain Monte Carlo (MCMC) methods are widely used. In the first half of the video a Markov chain produces samples that are approximately distributed according to $P$, with the sample path shown in black. In the second half an algorithm called Stein thinning[21] is applied to select a subset of states from the sample path, with selected states shown in red. These states are selected by greedy minimisation of a Stein discrepancy between the discrete distribution and $P$. Together, the selected states provide an approximation of $P$ that, in this instance, is more accurate than that provided by the original MCMC output.

Given a probability distribution $P$ defined on a measurable space $\mathcal{X}$, the quantisation task is to select a small number of states $x_1, \dots, x_n \in \mathcal{X}$ such that the associated discrete distribution $Q_n = \frac{1}{n} \sum_{i=1}^n \delta(x_i)$ is an accurate approximation of $P$ in a sense to be specified.

Stein points[20] are the result of performing optimal quantisation via minimisation of Stein discrepancy:

$\underset{x_1, \dots, x_n \in \mathcal{X}}{\operatorname{arg\,min}} \; D_P\!\left( \frac{1}{n} \sum_{i=1}^n \delta(x_i) \right) \qquad (3.1)$

Under appropriate regularity conditions, it can be shown[20] that $D_P(Q_n) \to 0$ as $n \to \infty$. Thus, if the Stein discrepancy enjoys convergence control, it follows that $Q_n$ converges to $P$. Extensions of this result, to allow for imperfect numerical optimisation, have also been derived.[20][22][21]
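A minimal sketch of this idea (illustrative Python, not taken from a particular library) replaces the joint minimisation in (3.1) by a greedy search over a finite candidate set, using the kernel Stein discrepancy with the inverse multi-quadric Stein kernel for a one-dimensional standard Gaussian target:

```python
import numpy as np

def k_p(x, y, beta=0.5):
    # IMQ Langevin-Stein kernel for the 1-d standard Gaussian target,
    # whose score function is grad log p(x) = -x.
    s = 1.0 + (x - y) ** 2
    return (-4 * beta * (beta + 1) * (x - y) ** 2 / s ** (beta + 2)
            + 2 * beta * (1.0 - (x - y) ** 2) / s ** (beta + 1)
            + x * y / s ** beta)

def greedy_stein_points(candidates, m):
    # Select m states one at a time, each time adding the candidate that
    # most reduces the squared kernel Stein discrepancy of the selected set.
    chosen = []
    for _ in range(m):
        def ksd_sq_with(c):
            pts = np.array(chosen + [c])
            return np.mean(k_p(pts[:, None], pts[None, :]))
        chosen.append(min(candidates, key=ksd_sq_with))
    return np.array(chosen)

candidates = np.linspace(-3.0, 3.0, 121)        # finite candidate grid
print(greedy_stein_points(candidates, m=5))     # states roughly spread under p
```

A practical implementation would optimise each new state over the whole domain, or over MCMC output as in Stein thinning, rather than over a fixed grid.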

Sophisticated optimisation algorithms have been designed to perform efficient quantisation based on Stein discrepancy, including gradient flow algorithms that aim to minimise kernel Stein discrepancy over an appropriate space of probability measures.[23]

Optimal weighted approximation

If one is allowed to consider weighted combinations of point masses, then more accurate approximation is possible compared to (3.1). For simplicity of exposition, suppose we are given a set of states $\{x_1, \dots, x_n\} \subset \mathcal{X}$. Then the optimal weighted combination of the point masses $\delta(x_i)$, i.e.

$Q_n^* = \sum_{i=1}^n w_i^* \, \delta(x_i), \qquad w^* = \underset{w_1 + \cdots + w_n = 1}{\operatorname{arg\,min}} \; D_P\!\left( \sum_{i=1}^n w_i \, \delta(x_i) \right),$

which minimises the Stein discrepancy, can be obtained in closed form when a kernel Stein discrepancy is used[5] (a computational sketch is given below). Some authors[24][25] consider imposing, in addition, a non-negativity constraint on the weights, i.e. $w_i \geq 0$. However, in both cases the computation of the optimal weights $w^*$ can involve solving linear systems of equations that are numerically ill-conditioned. It has been shown[21] that greedy approximation of $P$ using an un-weighted combination of $m \leq n$ states can reduce this computational requirement. In particular, the greedy Stein thinning algorithm

$\pi(m) \in \underset{i \in \{1, \dots, n\}}{\operatorname{arg\,min}} \; D_P\!\left( \frac{1}{m}\left[ \delta(x_i) + \sum_{j=1}^{m-1} \delta(x_{\pi(j)}) \right] \right), \qquad m = 1, 2, \dots,$

has been shown to satisfy an error bound

 
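For the kernel Stein discrepancy, the equality-constrained problem above has the closed-form solution $w^* \propto K_P^{-1}\mathbf{1}$ (assuming the matrix $K_P$, with entries $\mathcal{A}_P\mathcal{A}_P'K(x_i, x_j)$, is invertible). The following illustrative sketch (hypothetical function names, one-dimensional standard Gaussian target) includes a small jitter term, reflecting the ill-conditioning mentioned above:

```python
import numpy as np

def stein_kernel_matrix(x, beta=0.5):
    # IMQ Langevin-Stein kernel matrix K_P for a 1-d standard Gaussian target.
    d = x[:, None] - x[None, :]
    s = 1.0 + d ** 2
    sx, sy = -x[:, None], -x[None, :]           # score grad log p at each point
    return (-4 * beta * (beta + 1) * d ** 2 / s ** (beta + 2)
            + 2 * beta * (1.0 + d * (sx - sy)) / s ** (beta + 1)
            + sx * sy / s ** beta)

def optimal_weights(x, jitter=1e-10):
    # Weights summing to one that minimise w^T K_P w, the squared kernel
    # Stein discrepancy of sum_i w_i delta(x_i); the jitter guards against
    # numerical ill-conditioning of K_P.
    K = stein_kernel_matrix(x) + jitter * np.eye(len(x))
    sol = np.linalg.solve(K, np.ones(len(x)))
    return sol / sol.sum()

pts = np.random.default_rng(1).standard_normal(20) + 0.5
w = optimal_weights(pts)
print(w.sum(), w @ stein_kernel_matrix(pts) @ w)  # weights sum to 1; minimised KSD^2
```

Note that, without the optional non-negativity constraint, some of the resulting weights may be negative.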

Non-myopic and mini-batch generalisations of the greedy algorithm have been demonstrated[26] to yield further improvement in approximation quality relative to computational cost.

Variational inference

Stein discrepancy has been exploited as a variational objective in variational Bayesian methods.[27][28] Given a collection $\{Q_\theta : \theta \in \Theta\}$ of probability distributions on $\mathcal{X}$, parametrised by $\theta \in \Theta$, one can seek the distribution in this collection that best approximates a distribution $P$ of interest:

$\underset{\theta \in \Theta}{\operatorname{arg\,min}} \; D_P(Q_\theta).$

A possible advantage of Stein discrepancy in this context,[28] compared to the traditional Kullback–Leibler variational objective, is that $Q_\theta$ need not be absolutely continuous with respect to $P$ in order for $D_P(Q_\theta)$ to be well-defined. This property can be used to circumvent the use of flow-based generative models, for example, which impose diffeomorphism constraints in order to enforce absolute continuity of $Q_\theta$ and $P$.
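A minimal sketch of this use of Stein discrepancy (illustrative Python, with a Gaussian variational family, a Gaussian target, and samples obtained by reparameterisation so that the objective is a deterministic function of $\theta$):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
eps = rng.standard_normal(300)                  # fixed base noise (reparameterisation)
score_p = lambda x: -(x - 3.0)                  # score of the target, here P = N(3, 1)

def ksd_sq(x, beta=0.5):
    # Squared kernel Stein discrepancy of the sample x from the target P,
    # using the IMQ Langevin-Stein kernel.
    d = x[:, None] - x[None, :]
    s = 1.0 + d ** 2
    sx, sy = score_p(x)[:, None], score_p(x)[None, :]
    return np.mean(-4 * beta * (beta + 1) * d ** 2 / s ** (beta + 2)
                   + 2 * beta * (1.0 + d * (sx - sy)) / s ** (beta + 1)
                   + sx * sy / s ** beta)

def objective(theta):
    mu, log_sigma = theta
    return ksd_sq(mu + np.exp(log_sigma) * eps)  # samples from Q_theta = N(mu, sigma^2)

fit = minimize(objective, x0=np.array([0.0, 0.0]), method="Nelder-Mead")
print(fit.x)   # approaches (mu, log sigma) = (3, 0) as the sample size grows
```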

Statistical estimation

Stein discrepancy has been proposed as a tool to fit parametric statistical models to data. Given a dataset $\{x_1, \dots, x_n\} \subset \mathcal{X}$, consider the associated discrete distribution $Q_n = \frac{1}{n} \sum_{i=1}^n \delta(x_i)$. For a given parametric collection $\{P_\theta : \theta \in \Theta\}$ of probability distributions on $\mathcal{X}$, one can estimate a value of the parameter $\theta$ which is compatible with the dataset using a minimum Stein discrepancy estimator[29]

$\hat{\theta}_n := \underset{\theta \in \Theta}{\operatorname{arg\,min}} \; D_{P_\theta}(Q_n).$
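A minimal sketch of such an estimator (illustrative Python, estimating the location parameter of a Gaussian model $P_\theta = N(\theta, 1)$ by minimising the squared kernel Stein discrepancy with the inverse multi-quadric kernel):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def ksd_sq(data, theta, beta=0.5):
    # Squared kernel Stein discrepancy between the empirical distribution
    # of `data` and the model P_theta = N(theta, 1), whose score function
    # is grad log p_theta(x) = -(x - theta).
    x, y = data[:, None], data[None, :]
    sx, sy = -(x - theta), -(y - theta)
    s = 1.0 + (x - y) ** 2
    return np.mean(-4 * beta * (beta + 1) * (x - y) ** 2 / s ** (beta + 2)
                   + 2 * beta * (1.0 + (x - y) * (sx - sy)) / s ** (beta + 1)
                   + sx * sy / s ** beta)

data = np.random.default_rng(0).normal(loc=1.7, scale=1.0, size=200)
fit = minimize_scalar(lambda t: ksd_sq(data, t), bounds=(-10.0, 10.0), method="bounded")
print(fit.x)   # minimum Stein discrepancy estimate of the location, near 1.7
```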

The approach is closely related to the framework of minimum distance estimation, with the role of the "distance" being played by the Stein discrepancy. Alternatively, a generalised Bayesian approach to estimation of the parameter $\theta$ can be considered,[4] where, given a prior probability distribution with density function $\pi(\theta)$, $\theta \in \Theta$ (with respect to an appropriate reference measure on $\Theta$), one constructs a generalised posterior with probability density function

$\pi_n(\theta) \propto \pi(\theta) \, \exp\!\left( -\beta \, D_{P_\theta}(Q_n)^2 \right),$

for some $\beta > 0$ to be specified or determined.

Hypothesis testing

The Stein discrepancy has also been used as a test statistic for performing goodness-of-fit testing[6][7] and comparing latent variable models.[30] Since the aforementioned tests have a computational cost quadratic in the sample size, alternatives have been developed with (near-)linear runtimes.[31][15]

References

  1. J. Gorham and L. Mackey. Measuring Sample Quality with Stein's Method. Advances in Neural Information Processing Systems, 2015.
  2. Anastasiou, A., Barp, A., Briol, F-X., Ebner, B., Gaunt, R. E., Ghaderinezhad, F., Gorham, J., Gretton, A., Ley, C., Liu, Q., Mackey, L., Oates, C. J., Reinert, G. & Swan, Y. (2021). Stein's method meets statistics: A review of some recent developments. arXiv:2105.03481.
  3. Müller, Alfred (1997). "Integral Probability Metrics and Their Generating Classes of Functions". Advances in Applied Probability. 29 (2): 429–443. doi:10.2307/1428011. ISSN 0001-8678.
  4. Matsubara, T., Knoblauch, J., Briol, F-X., & Oates, C. J. Robust Generalised Bayesian Inference for Intractable Likelihoods. arXiv:2104.07359.
  5. Oates, C. J., Girolami, M., & Chopin, N. (2017). Control functionals for Monte Carlo integration. Journal of the Royal Statistical Society B: Statistical Methodology, 79(3), 695–718.
  6. Liu, Q., Lee, J. D., & Jordan, M. I. (2016). A kernelized Stein discrepancy for goodness-of-fit tests and model evaluation. International Conference on Machine Learning, 276–284.
  7. Chwialkowski, K., Strathmann, H., & Gretton, A. (2016). A kernel test of goodness of fit. International Conference on Machine Learning, 2606–2615.
  8. Gorham, J. & Mackey, L. (2017). Measuring sample quality with kernels. International Conference on Machine Learning, 1292–1301. PMLR.
  9. Gorham, J., Duncan, A. B., Vollmer, S. J., & Mackey, L. (2019). Measuring sample quality with diffusions. The Annals of Applied Probability, 29(5), 2884–2928.
  10. Shi, J., Liu, C., & Mackey, L. (2021). Sampling with Mirrored Stein Operators. arXiv:2106.12506.
  11. Barp, A., Oates, C. J., Porcu, E., & Girolami, M. (2018). A Riemann–Stein kernel method. arXiv:1810.04946.
  12. Xu, W. & Matsuda, T. (2021). Interpretable Stein Goodness-of-fit Tests on Riemannian Manifolds. International Conference on Machine Learning.
  13. Yang, J., Liu, Q., Rao, V., & Neville, J. (2018). Goodness-of-fit testing for discrete distributions via Stein discrepancy. International Conference on Machine Learning, 5561–5570. PMLR.
  14. Shi, J., Zhou, Y., Hwang, J., Titsias, M., & Mackey, L. (2022). Gradient Estimation with Discrete Stein Operators. arXiv:2202.09497.
  15. Huggins, J. H. & Mackey, L. (2018). Random Feature Stein Discrepancies. Advances in Neural Information Processing Systems.
  16. Gorham, J., Raj, A., & Mackey, L. (2020). Stochastic Stein Discrepancies. Advances in Neural Information Processing Systems.
  17. Fisher, M. & Oates, C. J. (2022). Gradient-Free Kernel Stein Discrepancy. arXiv:2207.02636.
  18. Afzali, E. & Muthukumarana, S. (2023). Gradient-Free Kernel Conditional Stein Discrepancy Goodness of Fit Testing. Machine Learning with Applications, 12, 100463. Elsevier.
  19. Mackey, L. & Gorham, J. (2016). Multivariate Stein factors for a class of strongly log-concave distributions. Electronic Communications in Probability, 21, 1–14.
  20. Chen, W. Y., Mackey, L., Gorham, J., Briol, F-X., & Oates, C. J. (2018). Stein points. International Conference on Machine Learning, 844–853. PMLR.
  21. Riabiz, M., Chen, W., Cockayne, J., Swietach, P., Niederer, S. A., Mackey, L., & Oates, C. J. (2021). Optimal thinning of MCMC output. Journal of the Royal Statistical Society B: Statistical Methodology, to appear. arXiv:2005.03952.
  22. Chen, W. Y., Barp, A., Briol, F-X., Gorham, J., Girolami, M., Mackey, L., & Oates, C. J. (2019). Stein Point Markov Chain Monte Carlo. International Conference on Machine Learning. arXiv:1905.03673.
  23. Korba, A., Aubin-Frankowski, P-C., Majewski, S., & Ablin, P. (2021). Kernel Stein Discrepancy Descent. arXiv:2105.09994.
  24. Liu, Q. & Lee, J. (2017). Black-box importance sampling. Artificial Intelligence and Statistics, 952–961. PMLR.
  25. Hodgkinson, L., Salomone, R., & Roosta, F. (2020). The reproducing Stein kernel approach for post-hoc corrected sampling. arXiv:2001.09266.
  26. Teymur, O., Gorham, J., Riabiz, M., & Oates, C. J. (2021). Optimal quantisation of probability measures using maximum mean discrepancy. International Conference on Artificial Intelligence and Statistics, 1027–1035. PMLR.
  27. Ranganath, R., Tran, D., Altosaar, J., & Blei, D. (2016). Operator variational inference. Advances in Neural Information Processing Systems, 29, 496–504.
  28. Fisher, M., Nolan, T., Graham, M., Prangle, D., & Oates, C. J. (2021). Measure transport with kernel Stein discrepancy. International Conference on Artificial Intelligence and Statistics, 1054–1062. PMLR.
  29. Barp, A., Briol, F.-X., Duncan, A. B., Girolami, M., & Mackey, L. (2019). Minimum Stein discrepancy estimators. Neural Information Processing Systems, 12964–12976.
  30. Kanagawa, H., Jitkrittum, W., Mackey, L., Fukumizu, K., & Gretton, A. (2019). A kernel Stein test for comparing latent variable models. arXiv:1907.00586.
  31. Jitkrittum, W., Xu, W., Szabó, Z., Fukumizu, K., & Gretton, A. (2017). A Linear-Time Kernel Goodness-of-Fit Test. Advances in Neural Information Processing Systems.