Cosmin Anitescu: Methods Based on Artificial Neural Networks for the Solution of Partial Differential Equations

Machine Learning Seminar presentation

Topic: Methods Based on Artificial Neural Networks for the Solution of Partial Differential Equations

Speaker: Dr. Cosmin Anitescu, Bauhaus-Universität Weimar

Time: Wednesday 2020.10.28, 10:00 CET

How to join: Please contact Jakub Lengiewicz

Abstract:

Machine learning, and in particular methods based on artificial neural networks, has become increasingly common in areas such as image processing, voice recognition, and object detection. The success in these areas has led to optimized hardware and software solutions for efficiently training large neural networks and solving previously intractable problems, and there is now a great deal of interest in applying these techniques to complex engineering problems.

In this talk, I will give a brief overview of some algorithms for solving partial differential equations using artificial neural networks, particularly with regard to dealing with the boundary conditions. I will also discuss some possibilities for adaptively choosing the training points and possibilities for further improvements in the efficiency and reliability of neural network-based PDE solvers.
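
To make this concrete, here is a minimal collocation-type sketch (an illustrative reconstruction under my own assumptions, not the speaker's implementation) for the 1D Poisson problem -u''(x) = f(x) on (0, 1) with homogeneous Dirichlet boundary conditions. The boundary conditions are imposed exactly ("hard") by multiplying the network output by x(1 - x), one common way of dealing with them; the training points are resampled uniformly at every step, whereas the talk also considers choosing them adaptively. Network size, learning rate, and the manufactured source term are arbitrary choices.

import math
import torch

# Minimal sketch: solve -u''(x) = f(x) on (0, 1) with u(0) = u(1) = 0.
torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1))

def u(x):
    # Trial solution: the factor x * (1 - x) enforces the boundary conditions exactly.
    return x * (1.0 - x) * net(x)

def f(x):
    # Manufactured source term; the exact solution is u(x) = sin(pi * x).
    return (math.pi ** 2) * torch.sin(math.pi * x)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(5000):
    x = torch.rand(128, 1, requires_grad=True)    # uniform collocation points
    ux = torch.autograd.grad(u(x).sum(), x, create_graph=True)[0]
    uxx = torch.autograd.grad(ux.sum(), x, create_graph=True)[0]
    loss = ((-uxx - f(x)) ** 2).mean()            # mean squared PDE residual
    opt.zero_grad(); loss.backward(); opt.step()

An energy-based variant, as in reference (1) below, would instead minimize a discretized potential energy of the trial solution rather than the squared strong-form residual.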

Additional material:

(1) An energy approach to the solution of partial differential equations in computational mechanics via machine learning: Concepts, implementation and applications https://doi.org/10.1016/j.cma.2019.112790 or https://arxiv.org/abs/1908.10407


Eleni Koronaki: “Dinky, Dirty, Dynamic & Deceptive Data (1)”: An overview of hybrid machine learning and equation-based modelling

Machine Learning Seminar presentation

Topic: “Dinky, Dirty, Dynamic & Deceptive Data (1)”: An overview of hybrid machine learning and equation-based modelling

Speaker: Dr. Eleni Koronaki, University of Luxembourg, Department of Computational Science

Time: Wednesday 2020.10.21, 10:00 CET

How to join: Please contact Jakub Lengiewicz

Abstract:

In the era of “Big Data”, machine learning frameworks are attractive candidates for leveraging abundant data and transforming it into meaningful information. Despite the success of methods such as Deep Neural Networks in diverse sectors, ranging from finance to healthcare and language recognition, to name just a few, their implementation in traditional engineering fields is not universal. The reason is twofold: firstly, first-principles-based models, albeit computationally expensive, remain consistent decision-making tools, all the more so with the evolution of computational algorithms and infrastructure. Secondly, in many applications the available data is not “big” enough to ensure the accuracy and reliability of machine learning workflows. This dichotomy has not gone unnoticed in the engineering community, and various efforts to address it have been published, some surprisingly as early as the early 1990s.

Nowadays, the advent of Physics-Informed Neural Networks (PINNs) revisits these older concepts with remarkable results. In this presentation, an illuminating overview of the “hybrid physics-informed machine learning” paradigm is attempted.
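
To make the paradigm concrete, the toy sketch below (my own assumed example, not taken from the presentation) blends a handful of noisy observations with an assumed equation-based model du/dt = -k*u, and learns both a neural approximation of u(t) and the physical parameter k from a single loss. All sizes and constants are illustrative, and the equal weighting of the two loss terms is an arbitrary choice.

import torch

# Hybrid toy sketch: sparse, noisy data plus an assumed model du/dt = -k * u.
torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1))
log_k = torch.nn.Parameter(torch.tensor(0.0))   # learnable physics parameter

t_data = torch.linspace(0.0, 2.0, 10).reshape(-1, 1)                 # "dinky"
u_data = torch.exp(-1.5 * t_data) + 0.02 * torch.randn_like(t_data)  # "dirty"

opt = torch.optim.Adam(list(net.parameters()) + [log_k], lr=1e-3)
for step in range(5000):
    t = (2.0 * torch.rand(64, 1)).requires_grad_()    # collocation points
    u = net(t)
    ut = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
    residual = ut + torch.exp(log_k) * u              # du/dt + k * u = 0
    loss = ((net(t_data) - u_data) ** 2).mean() + (residual ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# torch.exp(log_k) should now approach the decay rate (1.5) hidden in the data.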

Additional material:

(1) The quoted phrase in the title is due to Dr. A. Kott.


Saurabh Deshpande: Data-Driven Hyper-elastic Simulations

Machine Learning Seminar presentation

Topic: Data-Driven Hyper-elastic Simulations

Speaker: Saurabh Deshpande, University of Luxembourg, Department of Computational Science

Time: Wednesday 2020.10.14, 10:00 CET

How to join: Please contact Jakub Lengiewicz

Abstract:

Over the past decade, machine learning has started to revolutionize several fields, driven by the development of new algorithms and the ever-growing availability of data. Deep learning, a class of machine learning methods based on learning data representations, has demonstrated strong abilities at extracting high-level representations of complex processes. In this work, we will implement a particular class of machine learning architecture, the Convolutional Neural Network (CNN), to replace the finite element solver for 3D hyper-elastic simulations. We will also briefly touch upon the dropout technique and its possible use in quantifying the uncertainty of the neural network's predictions.
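
As a rough illustration of the two ingredients (the architecture, grid size, and dropout rate below are assumptions made for the sake of the example, not the network used in this work), the sketch maps a voxelized force field to a displacement field and keeps dropout active at prediction time, so that repeated forward passes (Monte Carlo dropout, see reference 2 below) yield both a mean prediction and an uncertainty estimate.

import torch

# Illustrative CNN surrogate: 3 force components in, 3 displacement components out.
class Surrogate(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.body = torch.nn.Sequential(
            torch.nn.Conv3d(3, 16, 3, padding=1), torch.nn.ReLU(),
            torch.nn.Dropout3d(0.1),
            torch.nn.Conv3d(16, 16, 3, padding=1), torch.nn.ReLU(),
            torch.nn.Dropout3d(0.1),
            torch.nn.Conv3d(16, 3, 3, padding=1))
    def forward(self, x):
        return self.body(x)

model = Surrogate()                       # assume it has been trained on FEM data
forces = torch.randn(1, 3, 16, 16, 16)    # dummy force grid (batch, c, d, h, w)
model.train()                             # keep dropout stochastic at inference
with torch.no_grad():
    samples = torch.stack([model(forces) for _ in range(50)])
mean, std = samples.mean(dim=0), samples.std(dim=0)   # prediction and its spread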

Additional material:

1) Simulation of hyperelastic materials in real-time using Deep Learning – https://arxiv.org/abs/1904.06197

2) Dropout as Bayesian approximation – https://arxiv.org/abs/1506.02142


Vasilis Krokos: Bayesian Neural Networks for uncertainty estimation on regression problems

Machine Learning Seminar presentation

Topic: Bayesian Neural Networks for uncertainty estimation on regression problems

Speaker: Vasilis Krokos, University of Cardiff & Synopsys-Simpleware

Time: Wednesday 2020.10.07, 10:00 CET

How to join: Please contact Jakub Lengiewicz

Abstract:

Although Neural Networks (NNs) are nowadays used in numerous applications, they traditionally lack a very important characteristic: they fail to incorporate uncertainty into their predictions. NNs are notorious for extrapolating poorly, which can lead to catastrophic consequences, so it is essential to incorporate uncertainty into any model used for critical tasks. A common example of a model that does quantify uncertainty is the Gaussian Process (GP), a stochastic regression model that outputs a mean prediction along with credible intervals. Unfortunately, GPs are known to scale very badly to large datasets and cannot easily be used for tasks such as classification and segmentation, where the inputs are images and Convolutional Neural Networks clearly dominate. In this work we will discuss how to transform any deterministic NN into a probabilistic one using the Bayes by Backprop method [Blundell et al., 2015], and we will test this NN on a simple 1D regression case.
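
For concreteness, here is a minimal Bayes by Backprop sketch in the spirit of [Blundell et al., 2015] (an illustrative reconstruction, not the speaker's code): each weight receives a Gaussian posterior with a learnable mean and a softplus-transformed scale, the reparameterization trick keeps the sampling differentiable, and the loss adds the KL divergence to a unit Gaussian prior on top of the data misfit. Biases are kept deterministic for brevity, and the fixed KL weight is an arbitrary stand-in for the minibatch scaling used in the paper.

import torch
import torch.nn.functional as F

class BayesianLinear(torch.nn.Module):
    # A linear layer whose weights are sampled from N(mu, sigma^2) on every call.
    def __init__(self, n_in, n_out):
        super().__init__()
        self.mu = torch.nn.Parameter(torch.zeros(n_out, n_in))
        self.rho = torch.nn.Parameter(torch.full((n_out, n_in), -3.0))
        self.bias = torch.nn.Parameter(torch.zeros(n_out))
        self.kl = torch.tensor(0.0)
    def forward(self, x):
        sigma = F.softplus(self.rho)                   # sigma > 0 by construction
        w = self.mu + sigma * torch.randn_like(sigma)  # reparameterization trick
        # KL( N(mu, sigma^2) || N(0, 1) ), summed over all weights
        self.kl = (-torch.log(sigma) + (sigma ** 2 + self.mu ** 2) / 2 - 0.5).sum()
        return F.linear(x, w, self.bias)

torch.manual_seed(0)
net = torch.nn.Sequential(BayesianLinear(1, 32), torch.nn.Tanh(),
                          BayesianLinear(32, 1))
x = torch.linspace(-1.0, 1.0, 100).reshape(-1, 1)
y = x ** 3 + 0.05 * torch.randn_like(x)                # toy 1D regression data

opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for step in range(2000):
    pred = net(x)                                      # one weight sample per pass
    kl = sum(m.kl for m in net if isinstance(m, BayesianLinear))
    loss = ((pred - y) ** 2).mean() + 1e-3 * kl        # ELBO up to constants
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():                                  # sample the weight posterior
    preds = torch.stack([net(x) for _ in range(100)])
mean, std = preds.mean(dim=0), preds.std(dim=0)        # prediction + credible band

Averaging many stochastic forward passes at the end is what turns the weight distributions into the mean prediction and credible intervals discussed in the abstract.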

Additional material:

1) [Blundell et al., 2015] https://arxiv.org/abs/1505.05424

2) A blog post about GPs that might be useful for people not familiar with them: http://krasserm.github.io/2018/03/19/gaussian-processes/

3) A very interesting extension to TensorFlow for probabilistic learning: https://www.tensorflow.org/probability