Statistics and Econometrics

Constrained Expectation-Maximization Algorithm for Stochastic Inertial Error Modeling: Study of Feasibility

Description: 

Stochastic modeling is a challenging task for low-cost sensors, whose errors can have complex spectral structures. This often makes the tuning process of the INS/GNSS Kalman filter sensitive and difficult. For example, first-order Gauss-Markov processes are very often used in inertial sensor models, but the estimation of their parameters is a non-trivial task when the error structure is mixed with other types of noise. Such estimation is often attempted by computing and analyzing Allan variance plots. This contribution addresses situations in which the estimation of error parameters by graphical interpretation is rather difficult. The novel strategy estimates these parameters directly by means of the Expectation-Maximization (EM) algorithm. The results of the algorithm are first analyzed from a critical and practical point of view using simulations with typically encountered error signals. These simulations show that the EM algorithm appears to perform better than the Allan variance and offers a procedure for estimating first-order Gauss-Markov processes mixed with other types of noise. At the same time, the conducted tests revealed limits of this approach related to convergence and stability issues. Suggestions are given to circumvent or mitigate these problems when the complexity of the error structure is "reasonable". This work also highlights that neither the suggested EM-based approach nor the Allan variance may be able to estimate the parameters of complex error models reasonably well, and shows the need for new estimation procedures to be developed in this context. Finally, an empirical scenario is presented to support the previous findings, highlighting the positive effect of the more sophisticated EM-based error modeling on a filtered trajectory.
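
The abstract gives no implementation details; as a minimal sketch of the kind of EM iteration it describes, the following assumes the simplest setting of a scalar first-order Gauss-Markov state observed in additive white noise. The E-step runs a Kalman filter and RTS smoother, and the M-step updates the autoregressive coefficient and the two noise variances in closed form (the numpy setup and all function names are illustrative, not the authors' code).

```python
import numpy as np

def kalman_rts(y, phi, q, r):
    """Kalman filter + RTS smoother for x_k = phi*x_{k-1} + w_k, y_k = x_k + v_k."""
    n = len(y)
    xp, Pp = np.zeros(n), np.zeros(n)            # one-step predictions
    xf, Pf = np.zeros(n), np.zeros(n)            # filtered estimates
    for k in range(n):
        if k == 0:
            xp[k], Pp[k] = 0.0, q / (1.0 - phi**2)      # stationary prior
        else:
            xp[k], Pp[k] = phi * xf[k-1], phi**2 * Pf[k-1] + q
        K = Pp[k] / (Pp[k] + r)                         # Kalman gain
        xf[k] = xp[k] + K * (y[k] - xp[k])
        Pf[k] = (1.0 - K) * Pp[k]
    xs, Ps, Pc = xf.copy(), Pf.copy(), np.zeros(n)      # Pc[k] = Cov(x_k, x_{k-1} | y)
    for k in range(n - 2, -1, -1):
        J = Pf[k] * phi / Pp[k+1]
        xs[k] = xf[k] + J * (xs[k+1] - xp[k+1])
        Ps[k] = Pf[k] + J**2 * (Ps[k+1] - Pp[k+1])
        Pc[k+1] = J * Ps[k+1]
    return xs, Ps, Pc

def em_gauss_markov(y, phi=0.5, q=1.0, r=1.0, iters=100):
    """EM estimates of (phi, q, r); tau = -dt / log(phi) recovers the GM time constant."""
    n = len(y)
    for _ in range(iters):
        xs, Ps, Pc = kalman_rts(y, phi, q, r)           # E-step: smoothed moments
        S11 = np.sum(xs[1:]**2 + Ps[1:])
        S00 = np.sum(xs[:-1]**2 + Ps[:-1])
        S10 = np.sum(xs[1:] * xs[:-1] + Pc[1:])
        phi = S10 / S00                                 # M-step: closed-form updates
        q = (S11 - phi * S10) / (n - 1)
        r = np.mean((y - xs)**2 + Ps)
    return phi, q, r
```

In line with the abstract's warnings, convergence of such an iteration depends on the starting values and can stall when several noise processes of similar scale are mixed.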

Isotone additive latent variable models

Description: 

For manifest variables with additive noise and for a given number of latent variables with an assumed distribution, we propose to estimate nonparametrically the association between latent and manifest variables. Our estimation is a two-step procedure: first, it employs standard factor analysis to estimate the latent variables as theoretical quantiles of the assumed distribution; second, it employs the backfitting procedure of additive models to estimate the monotone nonlinear associations between latent and manifest variables. The estimated fit may suggest a different latent distribution or point to nonlinear associations. We show on simulated data how, in terms of mean squared error, the nonparametric estimation improves on factor analysis. We then apply the new estimator to real data to illustrate its use for exploratory data analysis.
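
As a rough illustration of the two-step idea (not the authors' estimator), the sketch below uses scikit-learn's FactorAnalysis for step one and isotonic regression for step two, in the simplified case of a single latent variable, where backfitting reduces to one monotone fit per manifest variable; the Gaussian latent distribution and all names are assumptions.

```python
import numpy as np
from scipy.stats import norm
from sklearn.decomposition import FactorAnalysis
from sklearn.isotonic import IsotonicRegression

def isotone_fa(X, assumed_quantile=norm.ppf):
    """Two-step fit: factor scores -> assumed quantiles -> monotone curves."""
    # Step 1: classical factor analysis, one latent variable for simplicity
    z = FactorAnalysis(n_components=1).fit_transform(X)[:, 0]
    # Replace scores by theoretical quantiles of the assumed distribution
    ranks = z.argsort().argsort()
    zq = assumed_quantile((ranks + 0.5) / len(z))
    # Step 2: monotone (isotonic) fit of each manifest variable on the latent
    fits = [IsotonicRegression(increasing='auto', out_of_bounds='clip').fit(zq, X[:, j])
            for j in range(X.shape[1])]
    return zq, fits
```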

Assessing multivariate predictors of financial market movements: a latent factor framework for ordinal data

Description: 

Much of the trading activity in equity markets is directed to brokerage houses. In exchange, these provide so-called "soft dollars," which essentially are amounts spent on "research" to identify profitable trading opportunities. Soft dollars represent about USD 1 out of every USD 10 paid in commissions. They are obviously costly, and it is interesting for an institutional investor to determine, from a statistical point of view, whether soft-dollar inputs are worth being used (and indirectly paid for). To address this question, we develop association measures between what broker–dealers predict and what markets realize. Our data are ordinal predictions by two broker–dealers and realized values on several markets, measured on the same ordinal scale. We develop a structural equation model with latent variables in an ordinal setting that allows us to test the broker–dealers' ability to predict financial market movements. We use a multivariate logit model in a latent factor framework, develop a tractable estimator based on a Laplace approximation, and show its consistency and asymptotic normality. Monte Carlo experiments reveal that both the estimation method and the testing procedure perform well in small samples. The method is then used to analyze our dataset.
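
The estimator itself is not spelled out in the abstract; purely to illustrate the Laplace-approximation idea it relies on, the sketch below approximates the marginal log-likelihood of a single ordinal response after integrating out a standard-normal latent factor (the cumulative-logit form, cutpoints, loading, and function names are assumptions).

```python
import numpy as np
from scipy.optimize import minimize_scalar

def laplace_log_marginal(cond_loglik, eps=1e-4):
    """Log of the integral of exp(cond_loglik(u)) * N(u; 0, 1) du, by Laplace's method."""
    g = lambda u: cond_loglik(u) - 0.5 * u**2 - 0.5 * np.log(2.0 * np.pi)
    res = minimize_scalar(lambda u: -g(u))           # mode of the integrand
    u_hat, g_hat = res.x, -res.fun
    # numerical curvature -g''(u_hat) at the mode
    curv = -(g(u_hat + eps) - 2.0 * g(u_hat) + g(u_hat - eps)) / eps**2
    return g_hat + 0.5 * np.log(2.0 * np.pi / curv)  # exp(g_hat) * sqrt(2*pi / curv)

def cumulative_logit_loglik(y, cuts, loading):
    """Conditional log-likelihood of one ordinal response given the factor u."""
    def ll(u):
        cdf = lambda t: 1.0 / (1.0 + np.exp(-(t - loading * u)))
        hi = cdf(cuts[y]) if y < len(cuts) else 1.0
        lo = cdf(cuts[y - 1]) if y > 0 else 0.0
        return np.log(hi - lo)
    return ll

# e.g. category 2 of 4, cutpoints (-1, 0, 1), loading 0.8:
print(laplace_log_marginal(cumulative_logit_loglik(2, (-1.0, 0.0, 1.0), 0.8)))
```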

Robust Stochastic Dominance: A Semi-Parametric Approach

Description: 

Lorenz curves and second-order dominance criteria, the fundamental tools for stochastic dominance, are known to be sensitive to data contamination in the tails of the distribution. We propose two ways of dealing with the problem: (1) estimate Lorenz curves using parametric models, and (2) combine empirical estimation with a parametric (robust) estimation of the upper tail of the distribution using the Pareto model. Approach (2) is preferred because of its flexibility. Using simulations, we show the dramatic effect of a few contaminated observations on the Lorenz ranking and the performance of the robust semi-parametric approach (2). Since estimation is only a first step for statistical inference and since semi-parametric models are not straightforward to handle, we also derive asymptotic covariance matrices for our semi-parametric estimators.
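
One way to realize approach (2), sketched below under assumptions (the tail fraction, the Hill/Pareto maximum likelihood fit, and all names are illustrative, not the authors' estimator), is to fit a Pareto index to the top of the sorted sample and replace the observed tail by model-implied quantiles before computing Lorenz ordinates.

```python
import numpy as np

def robust_lorenz(income, tail_frac=0.10):
    """Lorenz ordinates with the top tail replaced by fitted Pareto quantiles."""
    x = np.sort(np.asarray(income, dtype=float))
    n = x.size
    k = max(int(np.ceil(tail_frac * n)), 2)
    x0 = x[n - k]                                  # tail threshold
    alpha = k / np.log(x[n - k:] / x0).sum()       # Pareto MLE (Hill estimator)
    u = (np.arange(1, k + 1) - 0.5) / k            # conditional tail probabilities
    x_tail = x0 * (1.0 - u) ** (-1.0 / alpha)      # model-implied tail quantiles
    x_clean = np.sort(np.concatenate([x[:n - k], x_tail]))
    L = np.concatenate([[0.0], np.cumsum(x_clean)]) / x_clean.sum()
    p = np.arange(n + 1) / n
    return p, L
```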

Distributional Dominance with Trimmed Data

Description: 

Distributional dominance criteria are commonly applied to draw welfare inferences about distributional comparisons, but conclusions drawn from empirical implementations of dominance criteria may be influenced by data contamination. We examine a nonparametric approach to refining Lorenz-type comparisons and apply the technique to two important examples from the Luxembourg Income Study database.
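
The trimming scheme is not specified in the abstract; purely as an illustration of a trimmed Lorenz-type comparison, the sketch below (trim level and names are assumptions) drops a fixed fraction from each tail before computing empirical Lorenz ordinates.

```python
import numpy as np

def trimmed_lorenz(income, trim=0.02):
    """Empirical Lorenz ordinates after symmetric trimming of both tails."""
    x = np.sort(np.asarray(income, dtype=float))
    lo, hi = int(trim * x.size), int((1.0 - trim) * x.size)
    x = x[lo:hi]
    L = np.concatenate([[0.0], np.cumsum(x)]) / x.sum()
    p = np.arange(x.size + 1) / x.size
    return p, L
```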

Robust Lorenz Curves: A Semi-Parametric Approach

Description: 

Lorenz curves and second-order dominance criteria are known to be sensitive to data contamination in the right tail of the distribution. We propose two ways of dealing with the problem: (1) estimate Lorenz curves using parametric models for income distributions, and (2) combine empirical estimation with a parametric (robust) estimation of the upper tail of the distribution using the Pareto model. Approach (2) is preferred because of its flexibility. Using simulations, we show the dramatic effect of a few contaminated observations on the Lorenz ranking and the performance of the robust approach (2). Statistical inference tools are also provided.
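
Once (robust) Lorenz ordinates are available, the ranking mentioned in the abstract is a pointwise curve comparison; the small sketch below checks empirical Lorenz dominance between two samples on a common grid (grid size and names are assumptions, and the paper's inference tools go beyond this raw comparison).

```python
import numpy as np

def lorenz_ordinates(x, p):
    """Empirical Lorenz curve L(p) of sample x evaluated on a probability grid p."""
    x = np.sort(np.asarray(x, dtype=float))
    cum = np.concatenate([[0.0], np.cumsum(x)]) / x.sum()
    return np.interp(p, np.arange(x.size + 1) / x.size, cum)

def lorenz_dominates(x_a, x_b, n_grid=99):
    """True if sample a Lorenz-dominates sample b (its curve is everywhere at least as high)."""
    p = np.linspace(0.01, 0.99, n_grid)
    return bool(np.all(lorenz_ordinates(x_a, p) >= lorenz_ordinates(x_b, p)))
```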

Robust Estimation of Personal Income Distribution Models

Description: 

Statistical problems in modelling personal income distributions include estimation procedures, testing, and model choice. Typically, the parameters of a given model are estimated by classical procedures such as maximum likelihood and least-squares estimators. Unfortunately, the classical methods are very sensitive to model deviations such as gross errors in the data, grouping effects, or model misspecification. These deviations can ruin the values of the estimators and inequality measures and can produce false information about the distribution of personal income in a given country. In this paper we discuss the use of robust techniques for the estimation of income distributions. These methods behave like the classical procedures at the model but are less influenced by model deviations and can be applied to general estimation problems.
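
The abstract names no particular estimator; as a toy contrast between classical and robust fits in the spirit it describes (the paper's bounded-influence techniques are more elaborate), the sketch below fits a log-normal income model by maximum likelihood and by median/MAD on clean and contaminated samples.

```python
import numpy as np

rng = np.random.default_rng(0)
income = rng.lognormal(mean=10.0, sigma=0.6, size=5000)
contaminated = income.copy()
contaminated[:25] *= 100.0                     # a few gross errors in the data

for data, label in [(income, "clean"), (contaminated, "contaminated")]:
    z = np.log(data)
    mle = (z.mean(), z.std())                  # classical (maximum likelihood) fit
    med = np.median(z)
    rob = (med, 1.4826 * np.median(np.abs(z - med)))   # robust location/scale fit
    print(label, "ML:", np.round(mle, 3), "robust:", np.round(rob, 3))
```

On the clean sample both fits agree; on the contaminated one, the classical estimates (notably the scale) shift while the robust ones barely move, which is the sensitivity the paper addresses.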

The Size Problem of Bootstrap Tests when the Null is Non- or Semiparametric
