Vasilis Krokos: Bayesian Neural Networks for uncertainty estimation on regression problems

Machine Learning Seminar Presentation

Topic: Bayesian Neural Networks for uncertainty estimation on regression problems

Speaker: Vasilis Krokos, University of Cardiff & Synopsys-Simpleware

Time: Wednesday 2020.10.07, 10:00 CET

How to join: Please contact Jakub Lengiewicz


Although Neural Networks (NNs) are nowadays widely used in numerous applications, they traditionally lack a very important characteristic: they fail to incorporate uncertainty into their predictions. NNs are notorious for extrapolating poorly, which can have catastrophic consequences, so it is essential to incorporate uncertainty into any model used for critical tasks. A common example of a model that does quantify uncertainty is the Gaussian Process (GP), a stochastic regression model that outputs a mean prediction along with credible intervals. Unfortunately, GPs are known to scale very badly to large datasets and cannot readily be used in tasks such as classification and segmentation, where the inputs are images and Convolutional Neural Networks clearly dominate. In this work we will discuss how to transform any deterministic NN into a probabilistic one using the Bayes by Backpropagation method [Blundell et al., 2015], and we will test the resulting NN on a simple 1D regression case.
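To make the idea concrete, here is a minimal NumPy sketch of the core mechanism in Bayes by Backpropagation: each weight is replaced by a variational Gaussian with parameters (mu, rho), a weight sample is drawn via the reparameterisation w = mu + softplus(rho) * eps with eps ~ N(0, 1), and repeated stochastic forward passes yield a predictive mean and credible intervals. The network size, the illustrative parameter values, and the helper names are assumptions for demonstration; in the actual method the variational parameters are learned by minimising the ELBO via backpropagation, and the training loop is omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)

def softplus(x):
    # Ensures the sampled standard deviation is always positive.
    return np.log1p(np.exp(x))

# Variational parameters (mu, rho) for a tiny 1-hidden-layer network.
# Illustrative values only -- Bayes by Backprop would learn these by
# minimising the ELBO (data misfit + KL divergence to the weight prior).
H = 16  # hidden units (arbitrary choice for this sketch)
params = {
    "mu_w1": rng.normal(0, 0.5, (1, H)), "rho_w1": np.full((1, H), -2.0),
    "mu_b1": np.zeros(H),                "rho_b1": np.full(H, -2.0),
    "mu_w2": rng.normal(0, 0.5, (H, 1)), "rho_w2": np.full((H, 1), -2.0),
    "mu_b2": np.zeros(1),                "rho_b2": np.full(1, -2.0),
}

def sample_forward(x):
    """One stochastic forward pass: every weight is freshly sampled from
    its variational Gaussian via w = mu + softplus(rho) * eps."""
    def sample(name):
        mu, rho = params["mu_" + name], params["rho_" + name]
        return mu + softplus(rho) * rng.standard_normal(mu.shape)
    h = np.tanh(x @ sample("w1") + sample("b1"))
    return h @ sample("w2") + sample("b2")

# Simple 1D regression inputs, as in the talk's test case.
x = np.linspace(-3.0, 3.0, 50).reshape(-1, 1)

# Monte Carlo over weight samples -> distribution over predictions.
preds = np.stack([sample_forward(x) for _ in range(200)])  # (200, 50, 1)

mean = preds.mean(axis=0).ravel()                   # predictive mean
lo, hi = np.quantile(preds, [0.025, 0.975], axis=0) # 95% credible band
```

Because every forward pass uses different sampled weights, the spread of `preds` directly quantifies the model's (epistemic) uncertainty, giving the NN the GP-like mean-plus-credible-interval output discussed above.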

Additional material:

1) [Blundell et al., 2015]

2) a blog post about GPs that might be useful for people not familiar with them:

3) a very interesting extension to TensorFlow for probabilistic learning: