Fast Robust Model Selection in Large Datasets


Author(s)

Dupuis, Debbie


Full text unavailable

Description

Large datasets are increasingly common in many research fields. In particular, in the linear regression context, a huge number of potential covariates are often available to explain a response variable, and the first step of a reasonable statistical analysis is to reduce the number of covariates. This can be done with a forward selection procedure that comprises selecting the variable to enter, deciding whether to retain it or stop the selection, and estimating the augmented model. Least squares plus t-tests can be fast, but the outcome of a forward selection might be suboptimal when there are outliers. In this paper, we propose a complete algorithm for fast robust model selection, including considerations for huge sample sizes. Since simply replacing the classical statistical criteria with robust ones is not computationally feasible, we develop simplified robust estimators, selection criteria, and testing procedures for linear regression. The robust estimator is a one-step weighted M-estimator that can be biased if the covariates are not orthogonal. We show that the bias can be reduced by iterating the M-estimator one or more steps further. In the variable selection process, we propose a simplified robust criterion based on a robust t-statistic that we compare to a false discovery rate adjusted level. We carry out a simulation study to show the good performance of our approach. We also analyze two datasets and show that the results obtained by our method outperform those from robust LARS and random forests.
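
The abstract describes a forward selection loop driven by a one-step weighted M-estimator and a robust t-statistic compared to a false discovery rate adjusted level. The sketch below is a minimal illustration of that general idea, not the authors' algorithm: it assumes Huber-type weights, a MAD residual scale, and a simplified Benjamini-Hochberg-style adjusted level, and all function names (huber_weights, one_step_wls, robust_forward_selection) are hypothetical.

```python
import numpy as np
from scipy import stats

def huber_weights(r, scale, c=1.345):
    """Huber weights: observations with large standardized residuals are downweighted."""
    u = np.abs(r) / (scale + 1e-12)
    w = np.ones_like(u)
    big = u > c
    w[big] = c / u[big]
    return w

def one_step_wls(X, y, r0, scale):
    """One weighted least-squares step from initial residuals r0
    (a crude stand-in for a one-step weighted M-estimator)."""
    w = huber_weights(r0, scale)
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return beta, w

def robust_forward_selection(X, y, alpha=0.05):
    """Greedy forward selection: at each step add the candidate with the largest
    robust |t|-statistic; stop when the best candidate fails a simplified
    FDR-style adjusted level."""
    n, p = X.shape
    selected, remaining = [], list(range(p))
    while remaining:
        best = None
        for j in remaining:
            Xj = np.column_stack([np.ones(n), X[:, selected + [j]]])
            # initial least-squares fit gives residuals and a MAD scale estimate
            b0, *_ = np.linalg.lstsq(Xj, y, rcond=None)
            r0 = y - Xj @ b0
            scale = np.median(np.abs(r0 - np.median(r0))) / 0.6745
            beta, w = one_step_wls(Xj, y, r0, scale)
            # approximate robust standard error of the candidate's coefficient
            xtwx_inv = np.linalg.inv((Xj * w[:, None]).T @ Xj)
            t = beta[-1] / (scale * np.sqrt(xtwx_inv[-1, -1]) + 1e-12)
            if best is None or abs(t) > abs(best[1]):
                best = (j, t)
        j, t = best
        df = n - len(selected) - 2            # intercept + selected covariates + candidate
        pval = 2 * stats.t.sf(abs(t), df)
        # simplified FDR-adjusted level for the (k+1)-th entering variable
        if pval > alpha * (len(selected) + 1) / p:
            break
        selected.append(j)
        remaining.remove(j)
    return selected
```

The paper's actual procedure goes beyond this sketch, for example by iterating the M-estimator further steps to reduce the bias that arises when covariates are not orthogonal, and by addressing computational issues specific to huge sample sizes.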

Partner institution

Language

English

Date

2011
