Abstract:
|
Quantile regression is widely used in many applications, e.g., economics and health care, to obtain a more complete understanding of the conditional distribution. This work studies the learnability of quantile regression trees uniformly across quantile losses in several settings. Our main contribution is a set of bounds that simultaneously account for the continuum of regrets indexed by the quantile level. Building on empirical process theory in the i.i.d. and sequential cases, we derive regret bounds that hold uniformly across quantile levels (i) for the empirical risk minimizer in the i.i.d. case, (ii) in the minimax sense, and (iii) for the exponential weights algorithm (EWA). Results (ii) and (iii) both concern time-dependent data and hold under a mild assumption on the data distribution. The resulting bound on the uniform minimax regret is sublinear in the sample size, up to a poly-logarithmic factor. The uniform regret bound derived for EWA is of a similar order to that in (ii).
|