Data-dependent choice of prior hyperparameters in Bayesian inference: consistency and merging of posterior distributions
Time: 11:30 a.m.
Università Cattolica del Sacro Cuore - École Polytechnique Fédérale de Lausanne
The Bayesian inferential paradigm prescribes the specification of a prior distribution on the parameters of the statistical model. For complex models, the subjective elicitation of prior hyperparameters can be a delicate and difficult task. This is particularly the case for hyperparameters that affect posterior inference through complexity penalization, shrinkage effects, and the like. In the absence of sufficient prior information, a principled specification of a hyper-prior distribution can also be difficult and may complicate computations. It is common practice to resort to a data-driven choice of the prior hyperparameters as a shortcut: this approach is commonly called empirical Bayes (EB). Although not rigorous from a Bayesian standpoint, the traditional folklore of EB analysis is that it provides approximations to genuine Bayesian inference, while enjoying some frequentist asymptotic guarantees. We give a new illustration of EB posterior consistency in a semiparametric estimation problem involving the analysis of extreme multivariate events. We then turn to parametric models and focus on merging in total variation between EB and Bayesian posterior/predictive distributions, almost surely as the sample size increases. We provide new results refining those in Petrone et al. (2014) and illustrate their applications in the context of variable selection.
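As a toy sketch of the data-driven hyperparameter choice described above (an illustrative assumption, not the model from the talk): in the Gaussian sequence model X_i | theta_i ~ N(theta_i, 1) with prior theta_i ~ N(0, tau^2), the marginal law X_i ~ N(0, 1 + tau^2) lets us estimate tau^2 from the data and plug it back into the prior, yielding EB posterior means.

```python
import numpy as np


def eb_posterior_means(x):
    """Plug-in empirical Bayes posterior means in the Normal means model.

    Model assumed for illustration: X_i | theta_i ~ N(theta_i, 1),
    theta_i ~ N(0, tau^2). Marginally E[X_i^2] = 1 + tau^2, which gives a
    method-of-moments estimate of the prior hyperparameter tau^2.
    """
    tau2_hat = max(float(np.mean(x ** 2)) - 1.0, 0.0)
    shrinkage = tau2_hat / (1.0 + tau2_hat)  # lies in [0, 1)
    # Posterior mean under the plugged-in prior: E[theta_i | x_i] = shrinkage * x_i.
    return shrinkage * x


rng = np.random.default_rng(0)
theta = rng.normal(0.0, 2.0, size=5000)    # true prior scale tau = 2
x = theta + rng.normal(size=theta.size)    # noisy observations
post_means = eb_posterior_means(x)
```

The EB estimates shrink each observation toward the prior mean by a data-determined factor; with many coordinates drawn from the assumed prior, they attain a smaller mean squared error than the raw observations.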
Microsoft Teams meeting