Abstract:
|
Empirical likelihood based methods rely on a nonparametric estimator of the data distribution computed under constraints imposed by parameter-dependent estimating equations. Thus, they efficiently combine the flexibility of a nonparametric procedure with the interpretability of a parametric model. It is well known that, even for simple regression models, the support of the empirical likelihood is a nonconvex set. Numerical procedures do not behave well on such supports: in most cases they are slow and require complicated tuning to converge to an optimum. In recent times, there has been considerable interest in the properties of the gradient of the log empirical likelihood. It has been shown that, under mild conditions and with high probability, at least one component of the gradient vector diverges at the boundary of the support. In this talk we discuss such properties of the gradient in detail. We also consider several potential uses of the gradient vector in applications of empirical likelihood based methods in both frequentist and Bayesian paradigms.
|