Samuel Renault: AI Act: a grasp of the future EU regulation on AI and its impact on AI practitioners

Machine Learning Seminar presentation

Topic: AI Act: a grasp of the future EU regulation on AI and its impact on AI practitioners.

Speaker: Samuel Renault, Luxembourg Institute of Science and Technology (LIST)

Time: Wednesday, 2022.09.28 10:00 am CET

How to join: Please contact Jakub Lengiewicz

Abstract:

The EU AI Act is an upcoming regulation put forward by the European Commission that aims to limit the negative impact of AI on EU citizens.
After the GDPR, adopted in 2016 and enforced since 2018, this new regulation is considered the next game changer in the AI / ML / data science ecosystem, making trustworthiness and accountability key principles for developing and operating AI-based systems. The regulation will rely largely on emerging AI standards so that the regulatory text remains valid as the technological field evolves.
While the regulation’s text is still under (heavy) discussion between EU institutions and interested bodies, one can already get a grasp of the impact the future regulation will have on the design, development and commercialisation of AI systems in Europe and beyond.
Which kinds of AI systems and which organisations will be affected by the regulation? What will be the obligations of AI system designers, operators, and users? Will it hamper AI innovation in Europe, as some interest groups claim? These will be the topics of the talk and discussion.

Additional material

Presentation: https://legato-team.eu/wp-content/uploads/2022/09/AI-Act_Legato-ML-Webinar_20220928.pdf

Alena Kopanicakova: Multilevel training of deep neural networks

Machine Learning Seminar presentation

Topic: Multilevel training of deep neural networks.

Speaker: Alena Kopanicakova, Division of Applied Mathematics, Brown University, Providence, USA

Time: Wednesday, 2022.09.21 3:00 pm CET

How to join: Please contact Jakub Lengiewicz

Abstract:

Deep neural networks (DNNs) are routinely used in a wide range of application areas and scientific fields, as they make it possible to efficiently predict the behavior of complex systems. However, before DNNs can be used effectively for prediction, their parameters have to be determined during the training process. Traditionally, training is cast as the minimization of a loss function, commonly performed using variants of the stochastic gradient descent (SGD) method. Although SGD and its variants have a low computational cost per iteration, their convergence properties tend to deteriorate with increasing network size. In this talk, we propose to alleviate the training cost of DNNs by leveraging nonlinear multilevel minimization methods. We will discuss how to construct a multilevel hierarchy and transfer operators by exploiting the structure of the DNN architecture, properties of the loss function, and the form of the data. The dependency on a large number of hyper-parameters will be reduced by employing a trust-region globalization strategy; in this way, the sequence of step sizes is induced automatically by the trust-region algorithm. Altogether, this gives rise to new classes of training methods, whose convergence properties will be analyzed using a series of numerical experiments.
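
Not the speaker's multilevel algorithm, but a minimal single-level sketch of the trust-region step control mentioned above: a candidate step is accepted or rejected, and the trust-region radius adapted, based on the ratio of actual to predicted loss reduction, so no learning-rate schedule is tuned by hand. The toy model, data and thresholds are illustrative assumptions.

```python
# Minimal sketch (not the speaker's multilevel algorithm): trust-region-style
# step control for training a small network, where the step length is adapted
# automatically from the ratio of actual to predicted loss reduction.
import torch

torch.manual_seed(0)

# Toy regression data (illustrative only)
X = torch.rand(256, 2)
y = torch.sin(3 * X[:, :1]) + X[:, 1:]**2

model = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
)
loss_fn = torch.nn.MSELoss()

def loss_and_grad():
    model.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    grad = torch.cat([p.grad.flatten() for p in model.parameters()])
    return loss.item(), grad

def add_step(step):
    with torch.no_grad():
        i = 0
        for p in model.parameters():
            n = p.numel()
            p.add_(step[i:i + n].view_as(p))
            i += n

radius, eta = 0.1, 0.1          # trust-region radius and acceptance threshold
for it in range(200):
    loss, grad = loss_and_grad()
    step = -radius * grad / (grad.norm() + 1e-12)   # steepest-descent step on the TR boundary
    pred_red = -(grad @ step).item()                # predicted reduction (first-order model)
    add_step(step)
    new_loss = loss_fn(model(X), y).item()
    rho = (loss - new_loss) / max(pred_red, 1e-12)  # actual vs predicted reduction
    if rho < eta:                                   # poor agreement: reject step, shrink radius
        add_step(-step)
        radius *= 0.5
    elif rho > 0.75:                                # good agreement: enlarge radius
        radius = min(2.0 * radius, 1.0)
    if it % 50 == 0:
        print(it, new_loss if rho >= eta else loss, radius)
```

In the multilevel setting discussed in the talk, analogous steps would also be computed on a hierarchy of coarser models and transferred back to the full network; that machinery is omitted from this sketch.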

Additional material

Video recording: https://youtu.be/B2vPQBHR5f8

Internship offer: IsoGeometric Analysis (IGA) implementation in the open-source software SOFA

Dear all,
We are offering an internship position for 4 to 6 months at the University of Luxembourg, in the Legato team led by Prof. Stéphane P.A. Bordas (FLSW). The subject deals with implementing IsoGeometric Analysis in the SOFA framework. Join a dynamic team and an open-source consortium! More details are provided in the attached document.

Internship_IGA_SOFA_2022

Guendalina Palmirotta: Machine Learning in Space Weather and how to handle it

Machine Learning Seminar presentation

Topic: Machine Learning in Space Weather and how to handle it.

Speaker: Guendalina Palmirotta, European Space Agency, Germany

Time: Wednesday, 2022.07.06, 10:00 CET

How to join: Please contact Jakub Lengiewicz

Abstract:

Machine learning (ML), an imposing but not necessarily new method, is today living its golden age, achieving unforeseen results in numerous industrial applications. On the other hand, Space Weather (SW), which describes changing environmental conditions in near-Earth space, is becoming more and more important to our society. But how can SW benefit from the ongoing ML revolution? In the last decade, many researchers, such as E. Camporeale, have shown that SW possesses all the ingredients typically required for a successful ML application. Using the large and freely available set of in situ and remote observations collected over several decades of space missions, it is possible to forecast and nowcast solar activity and thus protect our growing satellite constellations, and us!
In this talk, we will give a gentle introduction to this field and point out a number of open challenges that we believe are worth discussing and undertaking.

Additional material

Presentation slides: MLseminarSW22_slides

Han Zhang: Artificial Neural Network Method Based on Boundary Integral Equations

Machine Learning Seminar presentation

Topic: Artificial Neural Network Method Based on Boundary Integral Equations.

Speaker: Han Zhang, University of New South Wales (UNSW Sydney), Australia

Time: Wednesday, 2022.06.29, 10:00 CET

How to join: Please contact Jakub Lengiewicz

Abstract:

Artificial Neural Networks (ANNs) have attracted great interest among researchers as an option to solve or approximate complex engineering problems. In Anitescu et al. [1], they were used to solve boundary value problems (BVPs) for second order partial differential equations (PDEs) by minimizing the combined error in the PDE at multiple collocation points inside the domain and in the boundary conditions. In such an approach, ANNs have shown the potential of being an alternative to the finite element method (FEM), which is commonly used for solving BVPs.

The boundary element method (BEM) has long been a common alternative to FEM for solving BVPs. In many practical applications, a BVP can be transformed into a boundary integral equation (BIE) and then solved by BEM. BEM has several notable advantages over domain-type methods, such as lowering the dimension of the problem to reduce the computational cost and avoiding domain truncation errors in exterior domains. In Simpson et al. [2], an isogeometric boundary element method (IGABEM) was proposed, which combines BEM with Non-Uniform Rational B-Splines (NURBS) and further improves the accuracy of BEM.

In this work, we propose a new approach for solving BVPs formulated as BIEs. The approach consists in using a deep neural network to approximate the solution of the BIE. The loss function estimates the error at the collocation points and is subsequently minimized with respect to the network's weights and biases. The approach preserves the main advantage of IGABEM, i.e. using NURBS to describe the boundary, hence keeping the exact geometry and providing a tight link with CAD. It also inherits the main advantages of boundary-type methods and reduces the computational cost by using collocation points on the boundary only, which is very beneficial in practical applications where only boundary data measurements are available.

Application of the method to several benchmark problems for the Laplace equation is demonstrated. The results are presented in terms of loss-function convergence plots and error norms. A detailed parametric study is presented to evaluate the performance of the method. It is shown that the method is highly accurate for Dirichlet, Neumann and mixed problems in domains with smooth boundaries; however, the accuracy deteriorates in the presence of corners.

In future studies, the accuracy of the method could be improved by varying the choice of activation function and network architecture. The method can be extended to other applications by changing the BIE. In the current study, the method was implemented for 2D applications, but extension to 3D is also possible and is an objective of future work.
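
The discretized BIE residual is problem-specific and is not reproduced here; the snippet below is only a generic sketch of the collocation idea described above, in which a network evaluated at boundary collocation points is trained by minimizing a residual there. The residual function, geometry and network are illustrative placeholders, not the formulation used in the talk.

```python
# Generic sketch of collocation-based training on boundary points only.
# `residual` is a hypothetical placeholder for the discretized BIE residual
# described in the abstract, not the authors' actual formulation.
import math
import torch

torch.manual_seed(0)

# Collocation points on the boundary of the unit disk (illustrative geometry)
theta = torch.linspace(0.0, 2.0 * math.pi, 200).unsqueeze(1)
xy = torch.cat([torch.cos(theta), torch.sin(theta)], dim=1)

# Small network approximating the unknown boundary quantity
net = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

def residual(points, values):
    # Placeholder residual: mismatch with prescribed boundary data g(x, y) = x*y.
    # In the actual method this would be the discretized boundary integral equation.
    g = points[:, :1] * points[:, 1:2]
    return values - g

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for it in range(2000):
    opt.zero_grad()
    loss = residual(xy, net(xy)).pow(2).mean()   # error estimated at collocation points
    loss.backward()
    opt.step()
    if it % 500 == 0:
        print(it, loss.item())
```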

Additional material

[1] Anitescu et al. Artificial neural network methods for the solution of second order boundary value problems. 2019
[2] Simpson et al. A two-dimensional isogeometric boundary element method for elastostatic analysis. 2012

Video recording: https://youtu.be/iOPV_4gmWGw

Fateme Darlik: Using Physics-Informed constrained neural network to reconstruct the motion of the particles for time more than the training time

Machine Learning Seminar presentation

Topic: Using Physics-Informed constrained neural network to reconstruct the motion of the particles for time more than the training time.

Speaker: Fateme Darlik, Faculty of Science, Technology and Medicine, University of Luxembourg.

Time: Wednesday, 2022.06.22, 10:00 CET

How to join: Please contact Jakub Lengiewicz

Abstract:

Physics-informed neural networks (PINNs) have been studied and used widely in the field of computational mechanics. In this study, we propose a new approach that merges PINNs with (geometry-based) constrained neural networks. Using this method, we reconstruct the velocity fields of particles moving in a fixed bed. The trained neural network is then used to predict the motion of the particles for a period of time beyond the training time, and the results are compared with the simulation data in terms of accuracy and CPU time.
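
As a hedged illustration of the general physics-informed ingredient (not the speaker's constrained-network construction or fixed-bed setup), the sketch below fits a network x(t) to trajectory data on a training window while enforcing a toy physics residual, x''(t) = -g, on a wider time window, so that the network can then be evaluated beyond the training time.

```python
# Minimal physics-informed sketch (toy physics, not the speaker's model):
# fit x(t) to data on [0, 1] with the residual x''(t) + g = 0 enforced on a
# wider time window, then evaluate the network beyond the training time.
import torch

torch.manual_seed(0)
g = 9.81

# Synthetic "observed" trajectory on the training window [0, 1]
t_data = torch.linspace(0.0, 1.0, 50).unsqueeze(1)
x_data = 2.0 * t_data - 0.5 * g * t_data**2

# Collocation times for the physics residual extend past the training window
t_phys = torch.linspace(0.0, 2.0, 100).unsqueeze(1).requires_grad_(True)

net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for it in range(3000):
    opt.zero_grad()
    data_loss = (net(t_data) - x_data).pow(2).mean()
    x = net(t_phys)
    dx = torch.autograd.grad(x, t_phys, torch.ones_like(x), create_graph=True)[0]
    d2x = torch.autograd.grad(dx, t_phys, torch.ones_like(dx), create_graph=True)[0]
    phys_loss = (d2x + g).pow(2).mean()          # residual of x'' = -g
    loss = data_loss + phys_loss
    loss.backward()
    opt.step()

# Prediction beyond the training time (t = 1.5 > 1.0) vs. the analytic value
print(net(torch.tensor([[1.5]])).item(), 2.0 * 1.5 - 0.5 * g * 1.5**2)
```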

Additional material

Video recording: https://youtu.be/yQXZ2tNT-Mw

Aishwarya Krishnan: Design of a chat bot for service-now knowledge base

Machine Learning Seminar presentation

Topic: Design of a chat bot for service-now knowledge base.

Speaker: Aishwarya Krishnan, Faculty of Science, Technology and Medicine, University of Luxembourg

Time: Wednesday, 2022.06.01, 10:00 CET

How to join: Please contact Jakub Lengiewicz

Abstract:

ServiceNow is an important tool in industry that helps internal employees report and resolve their issues. In an organization that receives on average more than 7,000 IT-related incidents in six months, resolving issues with human assistance alone is cumbersome. My thesis therefore focuses on designing an AI-assisted chatbot that uses machine learning algorithms to retrieve a precise solution and provide an automated response to the user. In this talk I will discuss the challenges we face in an industrial setting: the roughness of the data, preprocessing it, and finally retrieving the relevant information from it. I will also discuss the libraries used to simplify the text data and the type conversions involved, as well as the input given to the BERT model and the working of the transformer model itself.
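
As a hedged illustration of one common way a BERT encoder can be used for such retrieval (not necessarily the pipeline described in the talk), the sketch below embeds a ticket and a few knowledge-base articles by mean-pooling the encoder's last hidden states and ranks the articles by cosine similarity. The model name, example texts and pooling choice are illustrative assumptions.

```python
# Illustrative sketch: ranking knowledge-base articles against a ticket with a
# BERT encoder (mean-pooled embeddings + cosine similarity). Model name and
# texts are placeholders, not the pipeline described in the talk.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embed(texts):
    inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state        # (batch, tokens, dim)
    mask = inputs["attention_mask"].unsqueeze(-1)          # ignore padding tokens
    return (hidden * mask).sum(1) / mask.sum(1)            # mean pooling

kb_articles = [
    "How to reset your VPN password",
    "Requesting access to a shared mailbox",
    "Fixing printer driver installation errors",
]
ticket = "I cannot connect to the VPN after changing my password"

scores = torch.nn.functional.cosine_similarity(embed([ticket]), embed(kb_articles))
best = scores.argmax().item()
print(kb_articles[best], scores[best].item())
```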

Additional material:

Video recording: https://youtu.be/1WqA_KL7lPM

Lorella Viola: The problem with GPT for the Humanities (and for humanity)

Machine Learning Seminar presentation

Topic: The problem with GPT for the Humanities (and for humanity).

Speaker: Lorella Viola, Luxembourg Centre for Contemporary and Digital History (C2DH), University of Luxembourg

Time: Wednesday, 2022.05.11, 10:00 CET

How to join: Please contact Jakub Lengiewicz

Abstract:

GPT models (Generative Pre-trained Transformer) have become an increasingly popular choice among researchers and practitioners. Their success is mostly due to the technology’s ability to move beyond single-word predictions. Indeed, unlike traditional neural network language models, GPT generates text by looking at the entirety of the input text. Thus, rather than determining relevance sequentially by looking at the most recent segment of input, GPT models determine each word’s relevance selectively. However, while on the one hand this ability allows the machine to ‘learn’ faster, on the other the datasets used for training have to be fed as one single document, meaning that all metadata is inevitably lost (e.g., date, authors, original source). Moreover, as GPT models are trained on crawled, English web material from 2016, they are not only ignorant of the world prior to this date, but they also express the language as used exclusively by English-speaking users (mostly white, young males). They also expect data pristine in quality, in the sense that these models have been trained on digitally-born material which does not present the typical problems of digitized, historical content (e.g., OCR mistakes, unusual fonts). Although GPT is a powerful technology, these issues seriously hinder its application to humanistic enquiry, particularly historical research. In this presentation, I discuss these and other problematic aspects of GPT and present the specific challenges I encountered while working on a historical archive of Italian American immigrant newspapers.

Additional material:

Video recording: https://youtu.be/J1zkS1mKGlE

Aubin Geoffre: Gaussian Process Regression with anisotropic kernel for supervised feature selection.

Machine Learning Seminar presentation

Topic: Gaussian Process Regression with anisotropic kernel for supervised feature selection.

Speaker: Aubin Geoffre, École Mines de Saint-Étienne

Time: Wednesday, 2022.05.04, 10:00 CET

How to join: Please contact Jakub Lengiewicz

Abstract:

Studying flows in random porous media leads one to consider a permeability tensor that depends directly on the pore geometry. The latter can be characterised through the computation of various morphological parameters: Delaunay triangulation characteristics, nearest-neighbour distance, etc. The natural question is: which morphological parameters provide the best statistical description of permeability? This question can be difficult to answer for several reasons: non-linear correlation between input parameters, non-linear correlation between inputs and outputs, small datasets, variability, etc.
A feature-selection method based on Gaussian Process Regression is proposed. It can be applied to a wide range of applications where the parameters that best explain a given output are sought among a set of correlated features. The method uses an anisotropic kernel that associates a hyperparameter with each feature. These hyperparameters can be interpreted as correlation lengths, providing an estimate of the weight of each feature with respect to the output.
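
As a hedged illustration of the idea (not the speaker's dataset or implementation), the sketch below fits scikit-learn's Gaussian process regressor with an anisotropic RBF kernel, i.e. one length scale per feature; after fitting, short learned length scales flag influential features while very long ones flag features the model effectively ignores. The synthetic data are illustrative.

```python
# Illustrative sketch of feature selection via an anisotropic (ARD) kernel:
# one length scale per feature; short learned length scales flag relevant features.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel, WhiteKernel

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 4))                    # 4 candidate features
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] ** 2      # only features 0 and 1 matter
y += 0.05 * rng.standard_normal(200)              # observation noise

kernel = (ConstantKernel() * RBF(length_scale=np.ones(4),
                                 length_scale_bounds=(1e-2, 1e3))
          + WhiteKernel())
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

# Learned length scales: small -> feature strongly drives the output,
# large -> feature is effectively irrelevant.
print(gpr.kernel_.k1.k2.length_scale)
```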

Additional material:

Video recording: https://youtu.be/S7_RcE2WmGk

Presentation: https://legato-team.eu/wp-content/uploads/2022/05/seminar_legato.pdf

Vu Chau: Non-parametric data-driven constitutive modelling using artificial neural networks.

Machine Learning Seminar presentation

Topic: Non-parametric data-driven constitutive modelling using artificial neural networks.

Speaker: Vu Chau, Department of Engineering, FSTM, University of Luxembourg

Time: Wednesday, 2022.04.13, 10:00 CET

How to join: Please contact Jakub Lengiewicz

Abstract:

This seminar talk addresses certain challenges associated with data-driven modelling of advanced materials, with special interest in the non-linear deformation response of rubber-like materials, soft polymers and biological tissue. The underlying (isotropic) hyperelastic deformation problem is formulated in the principal space, using principal stretches and principal stresses. The sought data-driven constitutive relation is expressed in terms of these principal quantities and is captured by a non-parametric representation using a trained artificial neural network (ANN).
The presentation investigates certain physics-motivated consistency requirements (e.g. limit behaviour, monotonicity) for the ANN-based prediction of principal stresses for given principal stretches, and discusses their implications for the architecture of such constitutive ANNs. As an example, the neural network is constructed, trained and tested using PyTorch.
The computational embedding of the data-driven material descriptor is demonstrated for the open-source finite element framework FEniCS and builds on the symbolic representation of the constitutive ANN operator in the Unified Form Language (UFL). We discuss the performance of the overall formulation within the non-linear solution process and outline some future directions of research.
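
The consistency constraints and the FEniCS/UFL embedding are the subject of the talk itself; the snippet below is only a minimal PyTorch sketch, under simplified assumptions, of the basic ingredient: a small network mapping the three principal stretches to three principal stresses, constructed so that the undeformed state returns exactly zero stress. Architecture, toy response and training data are illustrative.

```python
# Minimal sketch (illustrative, not the speaker's architecture): a network
# mapping principal stretches to principal stresses, constructed so that the
# undeformed state (1, 1, 1) returns exactly zero stress.
import torch

class PrincipalStressNet(torch.nn.Module):
    def __init__(self, width=64):
        super().__init__()
        self.core = torch.nn.Sequential(
            torch.nn.Linear(3, width), torch.nn.Softplus(),
            torch.nn.Linear(width, width), torch.nn.Softplus(),
            torch.nn.Linear(width, 3),
        )

    def forward(self, stretches):
        reference = torch.ones_like(stretches)
        # Subtracting the reference output enforces zero stress at lambda = (1, 1, 1)
        return self.core(stretches) - self.core(reference)

# Train on (stretch, stress) pairs, e.g. generated from a known hyperelastic
# model or from experiments (synthetic placeholder data here).
net = PrincipalStressNet()
stretches = 1.0 + 0.4 * torch.rand(512, 3)
stresses = stretches - 1.0 / stretches**2        # toy "ground truth" response
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for it in range(2000):
    opt.zero_grad()
    loss = (net(stretches) - stresses).pow(2).mean()
    loss.backward()
    opt.step()

print(net(torch.ones(1, 3)))                      # ~zero stress at the reference state
```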

Additional material:

Video recording: https://youtu.be/Vn0fLGHAkbk