Abstract:
Almost all of large-sample theory in statistics, econometrics, and machine learning begins with a specific randomness structure on the data, such as iid or uniformly mixing observations. The basic tool for proving asymptotic normality, however, is the Taylor series expansion, which is deterministic in nature. Following this line of thought, we provide deterministic bounds on the estimation error and the linear (Bahadur) representation error of M-estimators. These results are finite-sample (non-asymptotic) and apply to any realization of the data, without any assumptions of independence or dependence. The Newton-Kantorovich theorem plays a pivotal role in the proofs. After discussing these results, the talk presents applications to cross-validation, sub-sampling, and post-selection inference. This talk is based on https://arxiv.org/abs/1809.05172.
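
For context, a standard (affine-invariant) form of the Newton-Kantorovich theorem is sketched below; the precise conditions and constants used in the paper may differ. Let $F \colon D \subseteq \mathbb{R}^d \to \mathbb{R}^d$ be differentiable with $F'(\theta_0)$ invertible, and suppose that on a sufficiently large ball around $\theta_0$,
\[
\|F'(\theta_0)^{-1}\{F'(\theta) - F'(\vartheta)\}\| \le \omega\,\|\theta - \vartheta\|.
\]
If $\eta := \|F'(\theta_0)^{-1}F(\theta_0)\|$ satisfies $h := \omega\eta \le 1/2$, then $F$ has a zero $\hat{\theta}$ with
\[
\|\hat{\theta} - \theta_0\| \le \frac{1 - \sqrt{1 - 2h}}{\omega} \le 2\eta.
\]
Taking $F$ to be the sample score (the gradient of the M-estimation objective) evaluated at a fixed realization of the data, this yields a deterministic bound on the estimation error, with $-F'(\theta_0)^{-1}F(\theta_0)$ the leading term of the linear (Bahadur) representation.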