
Discriminating between two models based on Bregman divergence in small samples

Published 29 Sep 2017 in stat.ME (arXiv:1709.10505v1)

Abstract: Recently in [1, 2], Ali-Akbar Bromideh introduced the Kullback-Leibler Divergence (KLD) test statistic for discriminating between two models. It was found that the Ratio of Minimized Kullback-Leibler Divergence (RMKLD) works better than the Ratio of Maximized Likelihood (RML) for small sample sizes. The aim of this paper is to generalize the work of Ali-Akbar Bromideh by proposing a hypothesis test based on Bregman divergence in order to improve the model selection process. Our approach differs from his. After observing n data points of unknown density f, we first measure the closeness between the bias-reduced kernel density estimator and the first estimated candidate model, and then between the bias-reduced kernel density estimator and the second estimated candidate model. In both cases we use the Bregman Divergence (BD) and the bias-reduced kernel estimator of [3], which improves the convergence rates of kernel density estimators. Our testing procedure for model selection is thus based on comparing the value of the model selection test statistic to critical values from a standard normal table. We establish the asymptotic properties of the Bregman divergence estimator and deduce approximations of the power functions. The multi-step MLE process is used to estimate the parameters of the models. We illustrate the applicability of the BD with a real data set and with a data generating process (DGP). Monte Carlo simulation and numerical analysis are then used to interpret the results.
