Abstract:
Recent results in nonparametric regression show that deep learning, i.e., neural network estimates with many hidden layers, is able to circumvent the so-called curse of dimensionality provided suitable restrictions on the structure of the regression function hold. One key feature of the neural networks used in this context is that they are not fully connected. In this talk a review of these results is given, and a new result is presented which shows that similar results also hold for fully connected multilayer feedforward neural networks. Here the number of neurons per hidden layer is fixed and the number of hidden layers tends to infinity as the sample size tends to infinity.
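To make the architecture concrete, the following is a minimal sketch (plain NumPy) of a fully connected feedforward network whose width is fixed while its depth grows with the sample size n. The specific width, depth rule, and activation below are illustrative assumptions, not the choices made in the results discussed in the talk.

```python
import numpy as np

def depth_for_sample_size(n, c=1.0):
    """Illustrative rule only: number of hidden layers grows slowly with n."""
    return max(1, int(c * np.ceil(np.log(n + 1))))

def init_fully_connected(d_in, width, n_layers, rng):
    """Random weights for a fully connected network: d_in -> width -> ... -> width -> 1."""
    sizes = [d_in] + [width] * n_layers + [1]
    return [(rng.standard_normal((m, k)) / np.sqrt(k), np.zeros(m))
            for k, m in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    """Forward pass with sigmoid activations in the hidden layers, linear output."""
    h = x
    for W, b in params[:-1]:
        h = 1.0 / (1.0 + np.exp(-(W @ h + b)))  # hidden layer with squashing activation
    W, b = params[-1]
    return (W @ h + b)[0]

# Usage: the width stays fixed, the depth scales with the sample size.
rng = np.random.default_rng(0)
n, d, width = 1000, 5, 10
params = init_fully_connected(d, width, depth_for_sample_size(n), rng)
print(forward(params, rng.standard_normal(d)))
```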