A Prediction Divergence Criterion for Model Selection
Description
In this paper, we propose a new criterion for selection between nested models. We suppose that the correct model is one of (or close to one of) the available models and construct a criterion based on the Bregman divergence between the out-of-sample prediction of the smaller model and the in-sample prediction of the larger model. This criterion, the prediction divergence criterion (PDC), differs from commonly used criteria such as the AIC, BIC, and Cp in that, in a sequential approach, it directly considers the prediction divergence between two models, rather than the difference between one of those criteria evaluated at two different models. We derive an estimator of the PDC (the PDCE) using the parametric covariance penalty approach of Efron (2004), and, for the linear model and smoothing splines, we show that the PDCE applied to a suitable sequence of nested models, which we formalize, selects the correct model with probability 1 as the sample size tends to infinity. In finite samples, we compare the performance of our criterion to these criteria as well as to the lasso, and find that it outperforms them in terms of prediction error in sparse situations.
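As an illustration only, the sketch below shows a squared-error instance of such a prediction divergence for two nested ordinary least squares models, using leave-one-out fitted values as a stand-in for the smaller model's out-of-sample predictions. This is not the authors' implementation: the PDC is defined for general Bregman divergences, and the PDCE relies on a covariance penalty (Efron, 2004) rather than refitting; the function names here are hypothetical.

```python
# Minimal sketch (assumed illustration, not the paper's PDC/PDCE):
# squared-error divergence between the smaller model's leave-one-out
# predictions and the larger model's in-sample predictions.
import numpy as np

def loo_predictions(X, y):
    """Leave-one-out OLS predictions via the hat-matrix shortcut."""
    H = X @ np.linalg.solve(X.T @ X, X.T)   # hat matrix
    fitted = H @ y
    h = np.diag(H)
    return fitted - h * (y - fitted) / (1.0 - h)

def squared_error_divergence(X_small, X_large, y):
    """Sum of squared differences between the smaller model's out-of-sample
    predictions and the larger model's in-sample predictions."""
    pred_small = loo_predictions(X_small, y)                            # out-of-sample, smaller model
    pred_large = X_large @ np.linalg.lstsq(X_large, y, rcond=None)[0]   # in-sample, larger model
    return np.sum((pred_small - pred_large) ** 2)

# Toy usage with nested designs (columns of X_small are a subset of X_large).
rng = np.random.default_rng(0)
n = 100
X_large = np.column_stack([np.ones(n), rng.normal(size=(n, 3))])
X_small = X_large[:, :2]
y = X_large[:, :2] @ np.array([1.0, 2.0]) + rng.normal(size=n)
print(squared_error_divergence(X_small, X_large, y))
```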