In statistics, the g-prior is an objective prior for the regression coefficients of a multiple regression. It was introduced by Arnold Zellner.[1] It is a key tool in Bayes and empirical Bayes variable selection.[2][3]

Definition

Consider a data set $(x_1, y_1), \ldots, (x_n, y_n)$, where the $x_i$ are Euclidean vectors and the $y_i$ are scalars. The multiple regression model is formulated as

$$y_i = x_i^\mathsf{T}\beta + \varepsilon_i,$$

where the $\varepsilon_i$ are random errors. Zellner's g-prior for $\beta$ is a multivariate normal distribution with covariance matrix proportional to the inverse of the Fisher information matrix for $\beta$, similar to a Jeffreys prior.

Assume the $\varepsilon_i$ are i.i.d. normal with zero mean and variance $\sigma^2$. Let $X$ be the matrix with $i$th row equal to $x_i^\mathsf{T}$. Then the g-prior for $\beta$ is the multivariate normal distribution with prior mean a hyperparameter $\beta_0$ and covariance matrix proportional to $\sigma^2 (X^\mathsf{T} X)^{-1}$, i.e.,

$$\beta \mid \sigma^2 \sim \mathrm{N}\!\left[\beta_0,\ g\sigma^2 (X^\mathsf{T} X)^{-1}\right],$$

where $g$ is a positive scalar parameter.
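As a minimal illustration of this covariance structure, the following NumPy sketch builds the g-prior covariance for a design matrix and draws one sample of the coefficient vector from the prior; the design matrix, hyperparameter values, and random seed are illustrative assumptions, not part of the sources cited here.

```python
import numpy as np

# Sketch: construct the g-prior covariance g * sigma^2 * (X^T X)^{-1}
# for an illustrative design matrix and draw beta from the prior.
rng = np.random.default_rng(0)

n, k = 50, 3
X = rng.normal(size=(n, k))          # design matrix, ith row = x_i^T
beta0 = np.zeros(k)                  # prior mean hyperparameter
sigma2 = 1.0                         # error variance
g = 100.0                            # g-prior scale parameter

# Prior covariance: g * sigma^2 * (X^T X)^{-1}
prior_cov = g * sigma2 * np.linalg.inv(X.T @ X)

# One draw of beta from the g-prior N(beta0, prior_cov)
beta_sample = rng.multivariate_normal(beta0, prior_cov)
print(beta_sample)
```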

Posterior distribution of beta

The posterior distribution of $\beta$ is given as

$$\beta \mid \sigma^2, X, y \sim \mathrm{N}\!\left[q\hat\beta + (1-q)\beta_0,\ q\sigma^2 (X^\mathsf{T} X)^{-1}\right],$$

where $q = g/(1+g)$ and

$$\hat\beta = (X^\mathsf{T} X)^{-1} X^\mathsf{T} y$$

is the maximum likelihood (least squares) estimator of $\beta$. The vector of regression coefficients $\beta$ can therefore be estimated by its posterior mean under the g-prior, i.e., as a weighted average of the maximum likelihood estimator and $\beta_0$,

$$\tilde\beta = \frac{g}{1+g}\,\hat\beta + \frac{1}{1+g}\,\beta_0.$$

Clearly, as $g \to \infty$, the posterior mean converges to the maximum likelihood estimator.
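The shrinkage interpretation of the posterior mean can be seen in a short NumPy sketch; the simulated data, the chosen value of $g$, and the prior mean below are illustrative assumptions only.

```python
import numpy as np

# Sketch: posterior mean under the g-prior as a weighted average of the
# least-squares estimate and the prior mean beta0, with weight q = g/(1+g).
rng = np.random.default_rng(1)

n, k = 50, 3
X = rng.normal(size=(n, k))
beta_true = np.array([1.0, -2.0, 0.5])      # illustrative "true" coefficients
y = X @ beta_true + rng.normal(size=n)

beta0 = np.zeros(k)                          # prior mean hyperparameter
g = 25.0

beta_hat = np.linalg.solve(X.T @ X, X.T @ y) # maximum likelihood (least squares)
q = g / (1.0 + g)
beta_post_mean = q * beta_hat + (1.0 - q) * beta0

print(beta_hat)        # unshrunk estimate
print(beta_post_mean)  # shrunk toward beta0; approaches beta_hat as g grows
```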

Selection of g

Estimation of $g$ is slightly less straightforward than estimation of $\beta$. A variety of methods have been proposed, including Bayes and empirical Bayes estimators.[3]
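As a hedged illustration of two choices discussed in this literature, the sketch below contrasts a fixed "unit information" style choice $g = n$ with a local empirical Bayes style estimate of the form $\max(F - 1, 0)$, where $F$ is the usual F statistic for testing $\beta = 0$ in a model without an intercept; the data and the exact formulas are illustrative assumptions rather than a prescription from the references.

```python
import numpy as np

# Sketch of two ways to pick g, assuming a centered model with no intercept:
#   * a "unit information" style choice g = n, and
#   * a local empirical Bayes style estimate max(F - 1, 0),
# where F is the F statistic for testing beta = 0.  Values are illustrative.
rng = np.random.default_rng(2)

n, k = 100, 4
X = rng.normal(size=(n, k))
y = X @ np.array([0.8, 0.0, -1.2, 0.3]) + rng.normal(size=n)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
rss = np.sum((y - X @ beta_hat) ** 2)        # residual sum of squares
ess = np.sum((X @ beta_hat) ** 2)            # explained sum of squares
F = (ess / k) / (rss / (n - k))              # F statistic for beta = 0

g_unit_information = float(n)                # "unit information" style choice
g_empirical_bayes = max(F - 1.0, 0.0)        # local empirical Bayes style choice

print(g_unit_information, g_empirical_bayes)
```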

References

  1. ^ Zellner, A. (1986). "On Assessing Prior Distributions and Bayesian Regression Analysis with g Prior Distributions". In Goel, P.; Zellner, A. (eds.). Bayesian Inference and Decision Techniques: Essays in Honor of Bruno de Finetti. Studies in Bayesian Econometrics and Statistics. Vol. 6. New York: Elsevier. pp. 233–243. ISBN 978-0-444-87712-3.
  2. ^ George, E.; Foster, D. P. (2000). "Calibration and empirical Bayes variable selection". Biometrika. 87 (4): 731–747. CiteSeerX 10.1.1.18.3731. doi:10.1093/biomet/87.4.731.
  3. ^ a b Liang, F.; Paulo, R.; Molina, G.; Clyde, M. A.; Berger, J. O. (2008). "Mixtures of g priors for Bayesian variable selection". Journal of the American Statistical Association. 103 (481): 410–423. CiteSeerX 10.1.1.206.235. doi:10.1198/016214507000001337.

Further reading

  • Datta, Jyotishka; Ghosh, Jayanta K. (2015). "In Search of Optimal Objective Priors for Model Selection and Estimation". In Upadhyay, Satyanshu Kumar; et al. (eds.). Current Trends in Bayesian Methodology with Applications. CRC Press. pp. 225–243. ISBN 978-1-4822-3511-1.
  • Marin, Jean-Michel; Robert, Christian P. (2007). "Regression and Variable Selection". Bayesian Core : A Practical Approach to Computational Bayesian Statistics. New York: Springer. pp. 47–84. doi:10.1007/978-0-387-38983-7_3. ISBN 978-0-387-38979-0.