Perry de Valpine, University of California, Berkeley
Model selection methods will be discussed for hierarchical or mixed models, although many points will apply to simpler models as well. Examples of hierarchical or mixed models include state-space time-series models that incorporate measurement error; mark-recapture models with individual variability; and generalized linear mixed models. Almost the same computational work that is sometimes associated with Bayesian MCMC analysis for ecologically structured hierarchical models can be used for maximum likelihood estimation and frequentist inference for the same models using Monte Carlo Kernel Likelihoods, Monte Carlo Expectation Maximization, or related methods. A unifying principle connecting model selection methods such as information criteria (using maximum likelihood values) and Bayes factors is the minimization of prediction error for new data, which can also be estimated by cross-validation. The statistical framework of covariance penalties is used to illustrate that these methods are all aimed at the same underlying challenges of model selection and have much in common. One implication is that both Bayesian and maximum likelihood methods can "fully incorporate uncertainty" in estimates of prediction variability. A challenge of implementing model selection methods for hierarchical or mixed models is that an MCMC posterior sample or maximum likelihood estimate does not automatically give the likelihood value or Bayes factor. These may require additional computation, for which new, efficient methods are introduced. The challenges of Bayesian and frequentist model selection have a great deal in common when viewed as minimizing prediction error and navigating bias-variance tradeoffs. A worked example for a population dynamics state-space model is given to illustrate the methods and principles.
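The connection between information criteria (computed from maximum likelihood values) and cross-validation as estimates of prediction error can be sketched with a minimal toy example. This is not the talk's worked state-space example; the polynomial-regression setup, noise level, and all variable names here are illustrative assumptions, chosen only because the likelihood is available in closed form.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: quadratic truth plus Gaussian noise (illustrative only)
n = 60
x = np.linspace(-2, 2, n)
y = 1.0 + 0.5 * x - 0.8 * x**2 + rng.normal(0, 0.5, n)

def design(x, degree):
    """Polynomial design matrix with columns x^0, ..., x^degree."""
    return np.vander(x, degree + 1, increasing=True)

def fit_ols(X, y):
    """Least-squares fit; returns coefficients and the ML estimate of error variance."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = np.mean(resid**2)
    return beta, sigma2

def aic(x, y, degree):
    """AIC = -2 log L + 2k for Gaussian polynomial regression."""
    _, sigma2 = fit_ols(design(x, degree), y)
    m = len(y)
    loglik = -0.5 * m * (np.log(2 * np.pi * sigma2) + 1)
    k = degree + 2  # regression coefficients plus the error variance
    return -2 * loglik + 2 * k

def loocv_mse(x, y, degree):
    """Leave-one-out cross-validated squared prediction error."""
    errs = []
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        beta, _ = fit_ols(design(x[mask], degree), y[mask])
        pred = design(x[i:i + 1], degree) @ beta
        errs.append((y[i] - pred[0])**2)
    return np.mean(errs)

for d in (1, 2, 6):
    print(f"degree {d}: AIC = {aic(x, y, d):.1f}, LOOCV MSE = {loocv_mse(x, y, d):.3f}")
```

Both criteria target out-of-sample prediction error, so the badly underfitting linear model scores worse than the quadratic on both; for hierarchical models the same logic applies, but obtaining the maximum likelihood value itself requires the Monte Carlo methods discussed in the talk.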