Fabio Cuzzolin: The epistemic artificial intelligence project

Machine Learning Seminar presentation

Topic: The epistemic artificial intelligence project

Speaker: Fabio Cuzzolin, School of Engineering, Computing and Mathematics — Oxford Brookes University

Time: Wednesday, 2021.11.17, 10:00 CET

How to join: Please contact Jakub Lengiewicz

Abstract:

Although artificial intelligence (AI) has improved remarkably in recent years, its inability to deal with uncertainty severely limits its future applications. In its current form, AI cannot confidently make predictions robust enough to stand the test of data generated by processes different (even in tiny details, as shown by ‘adversarial’ results) from those seen at training time. While recognising this issue under different names (e.g. ‘overfitting’ or ‘domain adaptation’), traditional machine learning seems unable to address it in non-incremental ways. As a result, even state-of-the-art AI systems suffer from brittle behaviour and find it difficult to operate in new situations. The epistemic AI project re-imagines AI from the foundations, through a proper treatment of the “epistemic” uncertainty stemming from our forcibly partial knowledge of the world. Its overall objective is to create a new learning paradigm designed to provide worst-case guarantees on its predictions, thanks to a proper modelling of real-world uncertainties. The project aims to formulate a novel mathematical framework for optimisation under epistemic uncertainty, radically departing from current approaches that focus only on aleatory uncertainty. This new optimisation framework will in turn allow the creation of new ‘epistemic’ learning settings, spanning all the major areas of machine learning: unsupervised learning, supervised learning and reinforcement learning. Last but not least, the project aims to foster an ecosystem of academic, research, industry and societal partners throughout Europe, able to drive and sustain the EU’s leadership ambition in the search for a next-generation AI.
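
As a minimal illustration of the distinction the project builds on, the sketch below separates epistemic from aleatory uncertainty using a plain deep-ensemble decomposition; this is a generic textbook construction, not the project's actual framework.

import numpy as np

def entropy(p, axis=-1, eps=1e-12):
    return -np.sum(p * np.log(p + eps), axis=axis)

def uncertainty_decomposition(member_probs):
    """member_probs: (n_members, n_classes) softmax outputs of an ensemble for one input."""
    mean_p = member_probs.mean(axis=0)
    total = entropy(mean_p)                   # total predictive uncertainty
    aleatory = entropy(member_probs).mean()   # expected per-member entropy
    epistemic = total - aleatory              # disagreement between members
    return total, aleatory, epistemic

# Members that agree leave almost no epistemic term; disagreement inflates it.
agree = np.array([[0.90, 0.10], [0.88, 0.12], [0.92, 0.08]])
disagree = np.array([[0.95, 0.05], [0.50, 0.50], [0.10, 0.90]])
print(uncertainty_decomposition(agree))
print(uncertainty_decomposition(disagree))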

Additional material:

Video recording: https://youtu.be/GNCKqoQODR0

Marharyta Aleksandrova: Causal Inference & Causal Learning: Towards Causal ML (Part 2 of 3)

Machine Learning Seminar presentation

Topic: Causal Inference & Causal Learning: Towards Causal ML (Part 2 of 3)

Speaker: Marharyta Aleksandrova, Faculty of Science, Technology and Medicine; University of Luxembourg

Time: Wednesday, 2021.11.10, 10:00 CET

How to join: Please contact Jakub Lengiewicz

Abstract:

In this seminar, we will discuss the basics of causality theory: Causal Inference (how to measure the strength of a causal effect) and Causal Learning (how to learn a causal structure, that is, how to identify which variable is the cause and which is the effect). We will talk about what analysts should consider if they want to draw causal conclusions from their data analysis. The lectures will be accompanied by a series of practical demonstrations.
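
As a toy illustration of causal structure learning (a hedged sketch, not the speaker's material): a collider X -> Z <- Y can be detected from data because X and Y are independent marginally but become dependent once we condition on Z.

import numpy as np

rng = np.random.default_rng(1)
n = 50_000
x = rng.normal(size=n)
y = rng.normal(size=n)
z = x + y + 0.5 * rng.normal(size=n)   # collider: X -> Z <- Y

def partial_corr(a, b, c):
    """Correlation of a and b after linearly regressing out c."""
    ra = a - np.polyval(np.polyfit(c, a, 1), c)
    rb = b - np.polyval(np.polyfit(c, b, 1), c)
    return np.corrcoef(ra, rb)[0, 1]

print("corr(X, Y)    ", np.corrcoef(x, y)[0, 1])   # ~0: marginally independent
print("corr(X, Y | Z)", partial_corr(x, y, z))      # clearly nonzero: conditioning opens the collider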

Additional material:

Presentation and Colab: https://drive.google.com/file/d/1xGsyCEwaCGsTiiqXfgohi1FEy66zOc0n/view?usp=sharing

Further reading: https://docs.google.com/document/d/15_YfYsQrRQHaJ62FLIeuG91aSI6-lqiXz8Rqvbl9UmA/edit

Video recording: https://youtu.be/9Qp2Lb0FaQE

Marharyta Aleksandrova: Causal Inference & Causal Learning: Towards Causal ML (Part 1 of 3)

Machine Learning Seminar presentation

Topic: Causal Inference & Causal Learning: Towards Causal ML (Part 1 of 3)

Speaker: Marharyta Aleksandrova, Faculty of Science, Technology and Medicine; University of Luxembourg

Time: Wednesday, 2021.11.03, 10:00 CET

How to join: Please contact Jakub Lengiewicz

Abstract:

In this seminar, we will discuss the basics of causality theory: Causal Inference (how to measure the strength of a causal effect) and Causal Learning (how to learn a causal structure, that is, how to identify which variable is the cause and which is the effect). We will talk about what analysts should consider if they want to draw causal conclusions from their data analysis. The lectures will be accompanied by a series of practical demonstrations.
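
As a hedged toy example of measuring the strength of a causal effect (not the speaker's notebook): with a confounder W affecting both treatment and outcome, the naive difference in means is biased, while backdoor adjustment over W recovers the true effect.

import numpy as np

rng = np.random.default_rng(0)
n = 100_000
w = rng.binomial(1, 0.5, n)                  # confounder
t = rng.binomial(1, 0.2 + 0.6 * w)           # treatment assignment depends on W
y = 2.0 * t + 3.0 * w + rng.normal(size=n)   # true causal effect of T on Y is 2

naive = y[t == 1].mean() - y[t == 0].mean()

# Backdoor adjustment: average the within-stratum contrasts, weighted by P(W = s).
adjusted = sum(
    (y[(t == 1) & (w == s)].mean() - y[(t == 0) & (w == s)].mean()) * (w == s).mean()
    for s in (0, 1)
)
print("naive estimate:   ", naive)      # biased upward (> 2)
print("adjusted estimate:", adjusted)   # close to the true effect of 2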

Additional material:

Presentation and Colab: https://drive.google.com/file/d/1xGsyCEwaCGsTiiqXfgohi1FEy66zOc0n/view?usp=sharing

Video recording: https://youtu.be/K8L5WEFA3hA

Vladimir Despotovic: Markov Logic Networks: A step towards interpretable AI

Machine Learning Seminar presentation

Topic: Markov Logic Networks: A step towards interpretable AI

Speaker: Vladimir Despotovic, Faculty of Science, Technology and Medicine; University of Luxembourg

Time: Wednesday, 2021.10.27, 10:00 CET

How to join: Please contact Jakub Lengiewicz

Abstract:

Markov logic networks are highly expressive statistical relational models that combine complex relational information, expressed by first-order logic formulas, with uncertainty, represented by undirected probabilistic graphical models (Markov networks). They represent relational structure and uncertainty in a very compact manner, leading to human-interpretable models. In this talk we will discuss the theoretical background of Markov logic networks and showcase their application to spoken language understanding.
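
A minimal toy sketch of the scoring rule behind Markov logic networks, P(world) proportional to exp(sum_i w_i * n_i(world)), on a two-constant "smokers" example; the predicates and weight below are illustrative, not taken from the talk.

import itertools, math

constants = ["Anna", "Bob"]
W = 1.5   # weight of the single formula: Smokes(x) -> Cancer(x)

def n_true_groundings(world):
    """world maps ('Smokes', c) and ('Cancer', c) to True/False."""
    return sum((not world[("Smokes", c)]) or world[("Cancer", c)] for c in constants)

# Enumerate all possible worlds over the four ground atoms.
atoms = [(p, c) for p in ("Smokes", "Cancer") for c in constants]
worlds = [dict(zip(atoms, vals)) for vals in itertools.product([False, True], repeat=len(atoms))]

scores = [math.exp(W * n_true_groundings(w)) for w in worlds]
Z = sum(scores)

# Probability that Anna smokes but does not have cancer (her grounding of the formula is violated).
p = sum(s for w, s in zip(worlds, scores)
        if w[("Smokes", "Anna")] and not w[("Cancer", "Anna")]) / Z
print("P(Smokes(Anna) and not Cancer(Anna)) =", p)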

Additional material:

Paris Papavasileiou: A data-driven approach for the prediction and optimization of an industrial scale chemical vapour deposition process

Machine Learning Seminar presentation

Topic: A data-driven approach for the prediction and optimization of an industrial scale chemical vapour deposition process

Speaker: Paris Papavasileiou, Faculty of Science, Technology and Medicine; University of Luxembourg

Time: Wednesday, 2021.10.20, 10:00 CET

How to join: Please contact Jakub Lengiewicz

Abstract:

Chemical vapour deposition (CVD) processes are notorious for their complexity, which becomes evident when all the transport mechanisms and chemical reactions that take place inside a CVD reactor are considered. Implementing a computational fluid dynamics (CFD) model to simulate such processes is possible; however, the aforementioned complexity usually leads to high computational costs. For this reason, a purely data-driven approach is investigated. Several supervised learning algorithms are tested and their performance on the dataset is compared. This approach allows for efficient and accurate predictions for entire reactor geometries and different set-ups, at a fraction of the time that even a simplified CFD model would require.
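
A hedged sketch of the kind of supervised-learning comparison described above; the file name cvd_runs.csv, the feature columns and the deposition_rate target are assumptions for illustration, not the actual dataset.

import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.linear_model import Ridge
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Assumed layout: process set-up parameters as features, deposition rate as target.
df = pd.read_csv("cvd_runs.csv")          # hypothetical file
X = df.drop(columns=["deposition_rate"])
y = df["deposition_rate"]

models = {
    "ridge": make_pipeline(StandardScaler(), Ridge()),
    "random_forest": RandomForestRegressor(n_estimators=300, random_state=0),
    "gradient_boosting": GradientBoostingRegressor(random_state=0),
    "mlp": make_pipeline(StandardScaler(), MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000)),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name:18s} R^2 = {scores.mean():.3f} +/- {scores.std():.3f}")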

Additional material:

Soumianarayanan Vijayaraghavan: Neural-network acceleration of projection-based model-order-reduction for finite plasticity: Application to RVEs

Machine Learning Seminar presentation

Topic: Neural-network acceleration of projection-based model-order-reduction for finite plasticity: Application to RVEs

Speaker: Soumianarayanan Vijayaraghavan, Faculty of Science, Technology and Medicine; University of Luxembourg

Time: Wednesday, 2021.10.06, 10:00 CET

How to join: Please contact Jakub Lengiewicz

Abstract:

Compared to conventional projection-based model-order reduction, its neural-network acceleration has the advantage that the online simulations are equation-free, meaning that no system of equations needs to be solved iteratively. Consequently, no stiffness matrix needs to be constructed, and the stress update needs to be computed only once per increment. In this talk, I will present a recurrent neural network that we developed to accelerate a projection-based model-order reduction of the elastoplastic mechanical behavior of an RVE. The resulting speedup is substantial, whilst all microstructural information is preserved.
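
A minimal sketch, under assumed dimensions, of a recurrent surrogate that maps a strain-increment path to the reduced coordinates of a precomputed basis; the architecture below is illustrative, not the network presented in the talk.

import torch
import torch.nn as nn

class ReducedCoordRNN(nn.Module):
    def __init__(self, n_strain=6, n_modes=20, hidden=128):
        super().__init__()
        self.rnn = nn.LSTM(input_size=n_strain, hidden_size=hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_modes)   # reduced coordinates per increment

    def forward(self, strain_path):              # (batch, steps, n_strain)
        h, _ = self.rnn(strain_path)
        return self.head(h)                      # (batch, steps, n_modes)

model = ReducedCoordRNN()
dummy_path = torch.randn(8, 50, 6)               # 8 loading paths, 50 increments each
reduced_coords = model(dummy_path)
# Full-field displacements are recovered offline as u ~ Phi @ reduced_coords,
# with Phi the precomputed reduction basis; no stiffness matrix is assembled online.
print(reduced_coords.shape)                      # torch.Size([8, 50, 20])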

Additional material:

Saurabh Deshpande: Surrogate Deep Learning Framework for Real-Time Hyper-Elastic Simulations with Uncertainties

Machine Learning Seminar presentation

Topic: Surrogate Deep Learning Framework for Real-Time Hyper-Elastic Simulations with Uncertainties

Speaker: Saurabh Deshpande, Faculty of Science, Technology and Medicine; University of Luxembourg

Time: Wednesday, 2021.09.29, 10:00 CET

How to join: Please contact Jakub Lengiewicz

Abstract:

Conventional finite element solvers are computationally expensive for solving non-linear partial differential equations; in particular, they perform poorly in real-time applications. In this work we propose a deep learning surrogate model that predicts non-linear displacement solutions for hyper-elastic constitutive models in real time. We adopt a Bayesian inference approach, thereby producing probabilistic predictions of displacement fields together with estimates of their uncertainty. We apply our framework to benchmark hyper-elastic simulations to show that it is extremely fast yet accurate.
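
One common way to obtain such probabilistic displacement predictions is Monte Carlo dropout; the sketch below uses a stand-in fully connected network and assumed input/output sizes, not the architecture from the talk.

import torch
import torch.nn as nn

class DisplacementSurrogate(nn.Module):
    def __init__(self, n_in=3, n_out=3 * 1000, hidden=256, p_drop=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_in, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, n_out),            # displacements at 1000 nodes (x, y, z)
        )

    def forward(self, load):
        return self.net(load)

@torch.no_grad()
def predict_with_uncertainty(model, load, n_samples=50):
    model.train()                                # keep dropout active at prediction time
    samples = torch.stack([model(load) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)   # mean field + per-DOF uncertainty

model = DisplacementSurrogate()
mean_u, std_u = predict_with_uncertainty(model, torch.tensor([[0.0, 0.0, -5.0]]))
print(mean_u.shape, std_u.shape)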

Additional material:

Marharyta Aleksandrova: Conformal Prediction – Machine Learning with Accuracy Guarantees

Machine Learning Seminar presentation

Topic: Conformal Prediction – Machine Learning with Accuracy Guarantees

Speaker: Marharyta Aleksandrova, Faculty of Science, Technology and Medicine; University of Luxembourg

Time: Wednesday, 2021.09.22, 10:00 CET

How to join: Please contact Jakub Lengiewicz

Abstract:

The property of conformal predictors to guarantee a required accuracy rate, for example 95%, makes this framework attractive in various practical applications. This property is achieved at the price of reduced precision: in the case of classification, the system can output multiple class labels instead of one; in the case of regression, a numerical interval instead of a single value. In this talk, we’ll discuss the theory behind conformal prediction and how the choice of a nonconformity function influences the efficiency (how small the prediction sets are) of the resulting classifier.
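
A minimal sketch of split (inductive) conformal classification with a generic scikit-learn model; the dataset and nonconformity score below are illustrative, and the details differ from the speaker's notebook.

import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_cal, X_test, y_cal, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Nonconformity score: 1 - predicted probability of the true class.
cal_scores = 1.0 - clf.predict_proba(X_cal)[np.arange(len(y_cal)), y_cal]

alpha = 0.05                                   # target 95% coverage
n = len(cal_scores)
q_level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
qhat = np.quantile(cal_scores, q_level)

# Prediction set: every label whose nonconformity score falls below the threshold.
test_scores = 1.0 - clf.predict_proba(X_test)  # (n_test, n_classes)
prediction_sets = test_scores <= qhat          # boolean mask per label

coverage = prediction_sets[np.arange(len(y_test)), y_test].mean()
avg_size = prediction_sets.sum(axis=1).mean()
print(f"empirical coverage {coverage:.3f}, average set size {avg_size:.2f}")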

Additional material:

https://colab.research.google.com/drive/1c_vFcMWcx5GIb7bQG71FnHKqyscj9–6?usp=sharing

Mauro Dalle Lucca Tosi: TensAIR: an asynchronous and decentralized framework to distribute artificial neural networks training

Machine Learning Seminar presentation

Topic: TensAIR: an asynchronous and decentralized framework to distribute artificial neural networks training

Speaker: Mauro Dalle Lucca Tosi, Faculty of Science, Technology and Medicine; University of Luxembourg

Time: Wednesday, 2021.09.15, 10:00 CET

How to join: Please contact Jakub Lengiewicz

Abstract:

Over the last decades, artificial neural networks (ANNs) have drawn the attention of academia and industry for their ability to represent and solve complex problems. ANNs use algorithms based on stochastic gradient descent (SGD) to learn data patterns from training examples, which tends to be time-consuming. Researchers are studying how to distribute this computation across multiple GPUs to reduce training time. Modern implementations rely on synchronously scaling up resources or asynchronously scaling them out using a centralized communication network. However, both of these approaches suffer from communication bottlenecks, which may impair their scalability. In this research, we create TensAIR, a framework that scales out the training of sparse ANN models in an asynchronous and decentralized manner. Due to the commutative properties of SGD updates, we linearly scale out the number of gradients computed per second with minimal impact on the convergence of the models, relative to the models’ sparseness. These results indicate that TensAIR enables the training of sparse neural networks in significantly less time. We conjecture that this article may inspire further studies on the use of sparse ANNs in time-sensitive scenarios such as online machine learning, which until now were not considered feasible.
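
A toy illustration of the commutativity property the abstract relies on: sparse, additive SGD updates can be applied in any order with the same result. This is a sketch of the idea only, not TensAIR's actual dataflow implementation.

import numpy as np

rng = np.random.default_rng(0)
w0 = rng.normal(size=10)                 # shared model parameters

def sparse_update(w, idx, grad, lr=0.1):
    """Apply an SGD step that only touches the coordinates in idx."""
    w = w.copy()
    w[idx] -= lr * grad
    return w

# Two workers produce gradients touching different coordinates.
g1_idx, g1 = np.array([0, 3, 7]), rng.normal(size=3)
g2_idx, g2 = np.array([2, 5, 9]), rng.normal(size=3)

order_a = sparse_update(sparse_update(w0, g1_idx, g1), g2_idx, g2)
order_b = sparse_update(sparse_update(w0, g2_idx, g2), g1_idx, g1)
print(np.allclose(order_a, order_b))     # True: the order of application does not matter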

Additional material:

Alessandro Temperoni: Second-order methods for deep learning

Machine Learning Seminar presentation

Topic: Second-order methods for deep learning

Speaker: Alessandro Temperoni, Faculty of Science, Technology and Medicine; University of Luxembourg

Time: Wednesday, 2021.07.14, 10:00 CET

How to join: Please contact Jakub Lengiewicz

Abstract:

Second-order methods are widely used in optimization and have recently been applied to neural networks with impressive results. In this seminar, I will present two applications of second-order methods for deep learning.
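
As a hedged, minimal illustration of the machinery such methods rely on, the sketch below computes a Hessian-vector product by double backpropagation in PyTorch; the actual algorithms presented in the talk may differ (e.g. quasi-Newton or Kronecker-factored approximations).

import torch

model = torch.nn.Linear(5, 1)
x, y = torch.randn(32, 5), torch.randn(32, 1)
loss = torch.nn.functional.mse_loss(model(x), y)

params = list(model.parameters())
grads = torch.autograd.grad(loss, params, create_graph=True)   # keep graph for second derivatives
flat_grad = torch.cat([g.reshape(-1) for g in grads])

v = torch.randn_like(flat_grad)                                # arbitrary direction
hvp = torch.autograd.grad(flat_grad @ v, params)               # Hessian-vector product H v
flat_hvp = torch.cat([h.reshape(-1) for h in hvp])
print(flat_hvp.shape)                                          # same size as the flattened parameters

# Plugged into conjugate gradients, such products yield Newton-type steps
# without ever forming the full Hessian.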

Additional material: