Abstract:
|
In conventional neural networks, the weights of each layer are fixed point values with no associated uncertainty. Here we propose a probabilistic model that treats the weights of a neural network as random variables following a distribution conditioned on the input data. First, we obtain a point estimate of the weights via back-propagation by minimizing the network's loss function; next, we approximate the posterior distribution of the weights with a Gaussian by minimizing the Kullback–Leibler divergence, updating the Gaussian parameters with the Bayes by Backprop method; finally, we derive confidence-interval estimates for the network's outputs. We apply the network to a synthetic data set and one conventional data set and compare our method against other approaches. We conclude by discussing future directions for this research.
|