Convolutional neural networks (CNNs) are increasingly being used in critical systems, where robustness and alignment are crucial. In this context, the field of explainable artificial intelligence has proposed the generation of high-level explanations of the prediction process of CNNs through concept extraction. While these methods can detect whether or not a concept is present in an image, they are unable to determine its location. What is more, a fair comparison of such approaches is difficult due to a lack of proper validation procedures. In this talk, we discuss a novel method for automatic concept extraction and localization based on representations obtained through pixel-wise aggregations of CNN activation maps. Further, we introduce a process for the validation of concept-extraction techniques based on synthetic datasets with pixel-wise annotations of their main components, reducing the need for human intervention.
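The pixel-wise aggregation idea can be illustrated with a minimal sketch: per-pixel channel vectors are collected from a CNN activation tensor and clustered, so each cluster acts as a candidate concept and the label map localizes it. This is an assumption-laden toy (naive k-means, hypothetical function names), not the method presented in the talk.

```python
import numpy as np

def pixelwise_vectors(activation_maps):
    # activation_maps: (C, H, W) -> (H*W, C) matrix of per-pixel feature vectors
    C, H, W = activation_maps.shape
    return activation_maps.reshape(C, H * W).T

def kmeans_concepts(X, k, iters=20, seed=0):
    # Naive k-means: each cluster is a candidate "concept"; the resulting
    # label map shows where in the image each concept appears.
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return labels, centers
```

Reshaping the labels back to (H, W) yields a coarse concept-localization mask per image.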
In this presentation we describe a time-evolving digital twin and its application to a proof-of-concept engineering dynamics example. The digital twin is constructed by combining physics-based and data-based models of the physical twin, using a weighting technique. The resulting model combination enables the temporal evolution of the digital twin to be optimised based on the data recorded from the physical twin. This is achieved by creating digital twin output functions that are optimally-weighted combinations of physics- and/or data-based model components that can be updated over time to reflect the behaviour of the physical twin as accurately as possible. Approximate Bayesian computation (ABC) is used for the physics-based model in this work, on the premise that, in the context of digital twins, often only relatively simple physical models are available. For the data-based model, a nonlinear auto-regressive exogenous (NARX) neural network model was used. The engineering dynamics example is a system consisting of two cascading tanks driven by a pump. The data received by the digital twin is segmented so that the process can be carried out over relatively short time-scales. In this example, the weightings are computed based on error and robustness criteria. The results show how the time-varying water level in the tanks can be captured with the digital twin output functions, and a comparison is made with three different weighting choice criteria.
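The weighting step can be sketched as follows; inverse-error weighting over the latest data segment is just one simple choice (the talk also considers robustness-based criteria), and the function name and interface here are assumptions for illustration.

```python
import numpy as np

def combine_models(y_phys, y_data, y_true_recent):
    # Weight the physics-based and data-based predictions by the inverse of
    # their mean-squared error on the most recent data segment, so the
    # combined digital-twin output tracks whichever model currently fits best.
    e_p = np.mean((y_phys - y_true_recent) ** 2)
    e_d = np.mean((y_data - y_true_recent) ** 2)
    w_p = 1.0 / (e_p + 1e-12)
    w_d = 1.0 / (e_d + 1e-12)
    s = w_p + w_d
    return (w_p * y_phys + w_d * y_data) / s, (w_p / s, w_d / s)
```

Recomputing the weights on each incoming segment gives the time-evolving behaviour described above.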
Over the years, many standards have been defined to address the security vulnerabilities of the Signaling System No. 7 (SS7) protocol. Yet, it still suffers from many security issues that make it vulnerable to attacks such as the disclosure of subscribers’ locations, fraud, and the interception of calls or SMS messages. Several security measures employ rule-based solutions for the detection of attacks in telecom core networks. However, these become ineffective when an attacker deploys sophisticated attacks that are not easily detected by filtering mechanisms. One way to counter such attacks is to use supervised machine learning, which has demonstrated its ability to achieve promising results in detecting anomalies in various applications. Nonetheless, supervised models generally need to be trained on a large set of labeled data that cannot be obtained from telecom operators due to the excessive resource allocation and cost of labeling network traffic. Therefore, in this work, we propose an innovative approach based on semi-supervised learning, which combines labeled and unlabeled data to train a model for the detection of SS7 attacks in telecom core networks. Our approach adapts a Convolutional Neural Network (CNN)-based semi-supervised learning scheme and an improved version of the feature engineering of previous works, together with hyperparameter optimization. Experimental results show that the proposed approach achieves up to 100% accuracy on both real-world and simulated datasets.
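One common way to combine labeled and unlabeled data is self-training with pseudo-labels, sketched below. This is a generic illustration of the semi-supervised idea, not the specific CNN-based scheme used in this work; `model_proba` is a hypothetical stand-in for any classifier returning class probabilities.

```python
import numpy as np

def pseudo_label(model_proba, X_unlab, threshold=0.95):
    # Self-training step: keep only unlabeled samples whose predicted class
    # probability exceeds the confidence threshold, and use the model's
    # prediction as a pseudo-label for the next training round.
    proba = model_proba(X_unlab)          # shape (n, n_classes)
    conf = proba.max(axis=1)
    keep = conf >= threshold
    return X_unlab[keep], proba[keep].argmax(axis=1)
```

The pseudo-labeled samples are then merged with the scarce operator-labeled traffic and the classifier is retrained.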
Data availability and advances in computing power have enabled a huge growth in Machine Learning (ML) research and practice. In this wave, many tools for building machine learning models have been developed. These tools are used by data scientists for fast model building and prototyping. But obtaining actual value from ML models requires their deployment into production environments. A repeatable, fast, and reliable deployment process is vital for successful ML workflows. To facilitate deployment, we have designed and developed a special-purpose compiler that automates the translation of ML models from their formal description into source code. The design of the compiler supports a dynamic architecture that allows many different types of models as inputs and many target programming languages as outputs. When compiling an ML model, we aim for running-time efficiency of the deployed model on the target computer architecture. Therefore, not only can we deploy to several programming languages, but we can also exploit specific underlying characteristics of the architecture, such as multiple cores and the availability of graphics processing units.
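The model-to-source-code translation can be illustrated with a deliberately tiny sketch: emitting a C function from a fitted linear model's parameters. The real compiler handles many model types and targets; the function name and interface below are invented for illustration only.

```python
def compile_linear_model(weights, bias, name="predict"):
    # Emit C source computing w·x + b for a fitted linear model:
    # one term per feature, with the coefficients baked in as constants.
    terms = " + ".join(f"{w:.6f}f * x[{i}]" for i, w in enumerate(weights))
    return (
        f"float {name}(const float *x) {{\n"
        f"    return {terms} + {bias:.6f}f;\n"
        f"}}\n"
    )
```

Because the generated code is plain C with constants inlined, the target compiler can vectorize it for the deployment architecture.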
Artificial intelligence and data-driven models are well suited to modeling complex and non-linear processes. However, they require very large computation times for data preparation, analysis, and learning. Indeed, complex problems require a large network whose computations can be exceptionally long and tedious, especially at inference time. To overcome these problems, we propose a hybrid technique based on a multi-model structure. This structure models a nonlinear process by decomposing its nonlinear operating domain into a defined number of sub-models, each one representing a well-defined operating point. The multi-model concept is thus an interesting way to improve the accuracy of the model without greatly increasing the complexity of the empirical model, the training time, or the number of parameters to be estimated. As an example, we present a comparative study between the Artificial Neural Network (ANN), known as the most widely used and efficient technique in empirical modeling, and the proposed multi-model approach. This comparative study is applied to estimating the muscle forces of the forearm from muscle activities.
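A multi-model prediction can be sketched as a validity-weighted blend of sub-models, each anchored at its operating point. The Gaussian weighting below, and the use of the input itself as the scheduling variable, are assumptions made for illustration, not details from the talk.

```python
import numpy as np

def multimodel_predict(x, submodels, centers, sigma=1.0):
    # Blend sub-model outputs with Gaussian validity weights centred on
    # each operating point; near a centre, that sub-model dominates.
    w = np.exp(-((x - np.asarray(centers)) ** 2) / (2 * sigma**2))
    w = w / w.sum()
    return sum(wi * m(x) for wi, m in zip(w, submodels))
```

Each sub-model stays small (one operating point), which is how the scheme limits training time and parameter count compared with a single large network.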
The EU AI Act is a forthcoming regulation proposed by the European Commission that aims to limit the negative impact of AI on EU citizens.
After the GDPR, voted in 2016 and implemented in 2018, this new regulation is considered the next game changer in the AI / ML / data science ecosystem, making trustworthiness and accountability key principles in developing and operating AI-based systems. The regulation will largely rely on emerging AI standards to allow the regulatory text to remain valid under a dynamic evolution of the technological field.
While the regulation’s text is still under (heavy) discussion between EU institutions and interested bodies, one can get a grasp of the impact the future regulation will have on the design, development and commercialisation of AI systems in Europe and beyond.
What kind of AI systems and which organisations will be impacted by the regulation? What will be the obligations of AI system designers, operators, and users? Will it impact AI innovation in Europe, as stated by some groups of interest? These will be the topics of the talk and discussions.
Deep neural networks (DNNs) are routinely used in a wide range of application areas and scientific fields, as they allow the behavior of complex systems to be predicted efficiently. However, before DNNs can be used effectively for prediction, their parameters have to be determined during the training process. Traditionally, training is associated with the minimization of a loss function, which is commonly performed using variants of the stochastic gradient descent (SGD) method. Although SGD and its variants have a low computational cost per iteration, their convergence properties tend to deteriorate with increasing network size. In this talk, we propose to alleviate the training cost of DNNs by leveraging nonlinear multilevel minimization methods. We will discuss how to construct a multilevel hierarchy and transfer operators by exploiting the structure of the DNN architecture, the properties of the loss function, and the form of the data. The dependency on a large number of hyper-parameters is reduced by employing a trust-region globalization strategy, so that the sequence of step sizes is induced automatically by the trust-region algorithm. Altogether, this gives rise to new classes of training methods, whose convergence properties will be analyzed using a series of numerical experiments.
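The way a trust-region strategy replaces a hand-tuned step-size schedule can be seen in the classic ratio test below. This is the generic textbook update, shown only to illustrate the mechanism; the multilevel method of the talk is considerably more involved.

```python
def trust_region_step(loss, loss_trial, pred_reduction, radius,
                      eta=0.1, shrink=0.5, grow=2.0):
    # Ratio test: compare the actual loss reduction with the reduction
    # predicted by the local model. Accept the step if the ratio is good
    # enough, and adapt the trust-region radius accordingly -- no manually
    # scheduled learning rate is needed.
    rho = (loss - loss_trial) / max(pred_reduction, 1e-12)
    if rho < 0.25:
        radius *= shrink
    elif rho > 0.75:
        radius *= grow
    return rho > eta, radius
```

The radius thus plays the role of the step size, grown when the local model is trustworthy and shrunk when it is not.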
Machine learning (ML), an imposing but not necessarily new method, is today living its golden age, achieving unforeseen results in numerous industrial applications. On the other hand, Space Weather (SW), which describes the changing environmental conditions in near-Earth space, is becoming more and more important to our society. But how can SW benefit from the ongoing ML revolution? In the last decade, many researchers, such as E. Camporeale, have shown that SW possesses all the ingredients often required for a successful ML application. Using the large and freely available data set of in situ and remote observations collected over several decades of space missions, it is possible to forecast and nowcast solar activity and thus protect our growing satellite constellations and ourselves!
In this talk, we will give a warm introduction to this field and point out a number of open challenges that we believe are worth discussing and undertaking.
Artificial Neural Networks (ANNs) have aroused great interest among researchers as an option for solving or approximating complex engineering problems. In Anitescu et al. (2019), ANNs were used to solve boundary value problems (BVPs) for second-order partial differential equations (PDEs) by minimizing the combined error in the PDE at multiple collocation points inside the domain and in the boundary conditions. In such an approach, ANNs have shown the potential to be an alternative to the finite element method (FEM), which is commonly used for solving BVPs.
The boundary element method (BEM) has been a common alternative to FEM for solving BVPs. In many practical applications, a BVP can be transformed into a boundary integral equation (BIE) and then solved by BEM. BEM has several commendable advantages over domain-type methods, such as lowering the dimension of the problem to reduce the computational cost and avoiding domain truncation error in exterior domains. In Simpson et al. (2012), an isogeometric boundary element method (IGABEM) was proposed, which combines BEM with Non-Uniform Rational B-Splines (NURBS) and further improves the accuracy of BEM.
In this work, we propose a new approach for solving BVPs formulated as BIEs. The approach consists in using a deep neural network to approximate the solution of the BIE. The loss function estimates the error at the collocation points and is subsequently minimized with respect to the network weights and biases. The approach preserves the main advantage of IGABEM, i.e. using NURBS to describe the boundary, hence keeping the exact geometry and providing a tight link with CAD. It inherits the main advantages of boundary-type methods and reduces the computational cost by using collocation points on the boundary only, which is very beneficial in practical applications where only boundary data measurements are available.
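The core idea of minimizing a collocation-point residual can be sketched generically as below. The tiny network, the scalar boundary parameterization, and the `residual_fn` interface are assumptions for illustration; the actual BIE residual and NURBS boundary description of the proposed method are not shown.

```python
import numpy as np

def mlp(params, t):
    # Tiny one-hidden-layer network u(t) of a scalar boundary parameter t
    # (collocation points live on the boundary only, not in the interior).
    W1, b1, W2, b2 = params
    h = np.tanh(np.outer(t, W1) + b1)
    return h @ W2 + b2

def collocation_loss(params, residual_fn, t_pts):
    # Mean-squared residual of the (problem-specific) boundary integral
    # equation, evaluated at the boundary collocation points; this is the
    # quantity minimized over the network weights and biases.
    r = residual_fn(mlp(params, t_pts), t_pts)
    return np.mean(r ** 2)
```

A gradient-based optimizer then drives this loss toward zero, yielding a network that satisfies the BIE at the collocation points.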
The application of the method to some benchmark problems for the Laplace equation is demonstrated. The results are presented in terms of loss function convergence plots and error norms. A detailed parametric study is presented to evaluate the performance of the method. It is shown that the method is highly accurate for Dirichlet, Neumann and mixed problems when the boundary is smooth; however, the accuracy deteriorates in the presence of corners.
In future studies, the accuracy of the method could be improved by varying the choice of activation function and the network architecture. The method can be extended to other applications by changing the BIE. In the current study, the method was implemented for 2D applications, but an extension to 3D is also possible and is an objective of future work.
Anitescu et al., Artificial neural network methods for the solution of second order boundary value problems, 2019.
Simpson et al., A two-dimensional isogeometric boundary element method for elastostatic analysis, 2012.
Physics-informed neural networks (PINNs) have been studied and used widely in the field of computational mechanics. In this study, we propose a new approach that merges PINNs with (geometry-based) constrained neural networks. Using this method, we reconstruct the velocity fields of particles moving in a fixed bed. The trained neural network is then used to predict the motion of the particles over a period longer than the training time, and the results are compared with simulation data in terms of accuracy and CPU time.
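Merging a data-fit loss, a physics residual, and a geometric constraint can be sketched as a composite objective like the one below. The specific penalty form and weights are assumptions for illustration, not the formulation used in this study.

```python
import numpy as np

def constrained_pinn_loss(pred, target, physics_residual, inside_mask,
                          w_phys=1.0, w_geo=10.0):
    # Composite PINN-style loss: data misfit + PDE residual + a penalty on
    # predictions that leave the fixed-bed geometry (inside_mask is False
    # where the geometric constraint is violated).
    data = np.mean((pred - target) ** 2)
    phys = np.mean(physics_residual ** 2)
    geo = np.mean(~inside_mask)
    return data + w_phys * phys + w_geo * geo
```

Tuning the relative weights trades off fidelity to the simulation data against enforcement of the physics and geometry terms.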