Abstract:
|
Value-at-risk (VaR) and expected shortfall (ES) are two commonly used metrics for quantifying financial risk. In this work, we perform a simulation study to compare the VaR and ES estimation performance of two classes of models: the first combines AutoRegressive–Moving Average (ARMA) and Generalized AutoRegressive Conditional Heteroskedasticity (GARCH) models under different distributional assumptions on the innovations (parametric, non-parametric, and "semi-parametric" with a parametric tail distribution based on extreme value theory), while the second takes a non-parametric approach, local linear quantile autoregression, which imposes no parametric assumptions on the time series of interest, neither on the time dynamics nor on the innovation distribution. We then apply these models to several major US stocks to evaluate their empirical performance. Our results suggest that no single model is uniformly outperformed by its competitors; rather, some models perform better than others for particular loss series.
|