References of "Louppe, Gilles"
Peer Reviewed
Recurrent machines for likelihood-free inference
Pesah, Arthur; Wehenkel, Antoine ULiege; Louppe, Gilles ULiege

Conference (2018, December 08)

Likelihood-free inference is concerned with the estimation of the parameters of a non-differentiable stochastic simulator that best reproduce real observations. In the absence of a likelihood function, most existing inference methods optimize the simulator parameters through a handcrafted iterative procedure that tries to make the simulated data more similar to the observations. In this work, we explore whether meta-learning can be used in the likelihood-free context to learn automatically from data an iterative optimization procedure that solves likelihood-free inference problems. We design a recurrent inference machine that learns a sequence of parameter updates leading to good parameter estimates, without ever specifying an explicit notion of divergence between the simulated and real data distributions. We demonstrate our approach on toy simulators, showing promising results both in terms of performance and robustness.
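
For intuition, a minimal sketch of the kind of learned, recurrent parameter-update loop described above (illustrative only: the toy simulator, summary statistics and network architecture are placeholder choices, not the paper's code):

import torch
import torch.nn as nn

def simulate(theta, n=256):
    # Toy non-differentiable stochastic simulator: Gaussian with unknown mean theta.
    return torch.randn(n) + theta.detach()

def summarize(x):
    # Fixed summary statistics of a batch of (simulated or observed) samples.
    return torch.stack([x.mean(), x.std()])

class RecurrentUpdater(nn.Module):
    # Maps (current theta, simulated summary, observed summary) to the next estimate.
    def __init__(self, hidden=32):
        super().__init__()
        self.cell = nn.GRUCell(input_size=5, hidden_size=hidden)
        self.head = nn.Linear(hidden, 1)

    def forward(self, theta, s_sim, s_obs, h):
        inp = torch.cat([theta.view(1), s_sim, s_obs]).unsqueeze(0)
        h = self.cell(inp, h)
        return theta + self.head(h).squeeze(), h

updater = RecurrentUpdater()
x_obs = torch.randn(256) + 1.5                 # pretend observations, true theta = 1.5
s_obs = summarize(x_obs)
theta, h = torch.zeros(1), torch.zeros(1, 32)
for _ in range(10):                            # unrolled sequence of learned updates
    theta, h = updater(theta, summarize(simulate(theta)), s_obs, h)

Training (not shown) would backpropagate through the unrolled loop against a loss on the intermediate and final estimates, so that the learned updater, rather than a handcrafted rule, drives the simulated data toward the observations.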

Peer Reviewed
Deep Quality-Value (DQV) Learning
Sabatelli, Matthia ULiege; Louppe, Gilles ULiege; Geurts, Pierre ULiege et al

in Advances in Neural Information Processing Systems, Deep Reinforcement Learning Workshop (2018)

We introduce a novel Deep Reinforcement Learning (DRL) algorithm called Deep Quality-Value (DQV) Learning. DQV uses temporal-difference learning to train a Value neural network and uses this network for training a second Quality-Value network that learns to estimate state-action values. We first test DQV's update rules with Multilayer Perceptrons as function approximators on two classic RL problems, and then extend DQV with Deep Convolutional Neural Networks, 'Experience Replay' and 'Target Neural Networks' for tackling four games of the Atari Arcade Learning Environment. Our results show that DQV learns significantly faster and better than Deep Q-Learning and Double Deep Q-Learning, suggesting that our algorithm can potentially be a better-performing synchronous temporal-difference algorithm than those currently present in DRL.
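
As a rough illustration of the update rules summarized above (a hedged sketch: the network sizes, names V, Q and dqv_losses, and the dummy batch are placeholder choices, not the authors' implementation), both networks can be regressed onto the same Value-network temporal-difference target:

import torch
import torch.nn as nn

state_dim, n_actions, gamma = 4, 2, 0.99
V = nn.Sequential(nn.Linear(state_dim, 32), nn.ReLU(), nn.Linear(32, 1))          # Value network
Q = nn.Sequential(nn.Linear(state_dim, 32), nn.ReLU(), nn.Linear(32, n_actions))  # Quality-Value network

def dqv_losses(s, a, r, s_next, done):
    # Shared temporal-difference target built from the Value network.
    with torch.no_grad():
        target = r + gamma * V(s_next).squeeze(-1) * (1.0 - done)
    v_loss = (V(s).squeeze(-1) - target).pow(2).mean()
    q_loss = (Q(s).gather(1, a.unsqueeze(1)).squeeze(1) - target).pow(2).mean()
    return v_loss, q_loss  # each minimized by its own optimizer

# Example call on a dummy batch of 8 transitions.
s, s_next = torch.randn(8, state_dim), torch.randn(8, state_dim)
a, r, done = torch.randint(0, n_actions, (8,)), torch.randn(8), torch.zeros(8)
v_loss, q_loss = dqv_losses(s, a, r, s_next, done)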

Likelihood-free inference with an improved cross-entropy estimator
Stoye, Markus; Brehmer, Johann; Louppe, Gilles ULiege et al

E-print/Working paper (2018)

We extend recent work (Brehmer et al., 2018) that uses neural networks as surrogate models for likelihood-free inference. As in the previous work, we exploit the fact that the joint likelihood ratio and joint score, conditioned on both observed and latent variables, can often be extracted from an implicit generative model or simulator to augment the training data for these surrogate models. We show how this augmented training data can be used to define a new cross-entropy estimator, which provides improved sample efficiency compared to previous loss functions exploiting this augmented training data.

Efficient Probabilistic Inference in the Quest for Physics Beyond the Standard Model
Gunes Baydin, Atilim; Heinrich, Lukas; Bhimji, Wahid et al

E-print/Working paper (2018)

We present a novel framework that enables efficient probabilistic inference in large-scale scientific models by allowing the execution of existing domain-specific simulators as probabilistic programs, resulting in highly interpretable posterior inference. Our framework is general purpose and scalable, and is based on a cross-platform probabilistic execution protocol through which an inference engine can control simulators in a language-agnostic way. We demonstrate the technique in particle physics, on a scientifically accurate simulation of the tau lepton decay, which is a key ingredient in establishing the properties of the Higgs boson. High-energy physics has a rich set of simulators based on quantum field theory and the interaction of particles in matter. We show how to use probabilistic programming to perform Bayesian inference in these existing simulator codebases directly, in particular conditioning on observable outputs from a simulated particle detector to directly produce an interpretable posterior distribution over decay pathways. Inference efficiency is achieved via inference compilation where a deep recurrent neural network is trained to parameterize proposal distributions and control the stochastic simulator in a sequential importance sampling scheme, at a fraction of the computational cost of Markov chain Monte Carlo sampling.
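
To fix ideas about the importance-sampling mechanism mentioned above, here is a hedged, single-latent toy sketch (the model, the hand-fixed proposal and all names are placeholders, not the framework's protocol or code):

import torch
import torch.distributions as D

# Toy generative model: z ~ N(0, 1), x ~ N(z, 1); we observe x_obs and want p(z | x_obs).
x_obs = torch.tensor(1.2)

def proposal(x):
    # Stand-in for the trained recurrent proposal network: a hand-fixed Gaussian q(z | x).
    return D.Normal(loc=0.5 * x, scale=0.8)

q = proposal(x_obs)
z = q.sample((10_000,))                               # draws from the (learned) proposal
log_w = (D.Normal(0.0, 1.0).log_prob(z)              # prior p(z)
         + D.Normal(z, 1.0).log_prob(x_obs)          # likelihood p(x_obs | z)
         - q.log_prob(z))                            # minus proposal density
w = torch.softmax(log_w, dim=0)                      # self-normalized importance weights
posterior_mean = (w * z).sum()                       # approx. 0.6 for this conjugate toy model

In the framework described above, the proposal would instead be produced by the trained recurrent network as the simulator executes, with the weighting applied sequentially at each latent random choice.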

Deep generative models for fast shower simulation in ATLAS
The ATLAS collaboration; Louppe, Gilles ULiege

E-print/Working paper (2018)

Machine Learning in High Energy Physics Community White Paper
Albertsson, Kim; Altoe, Piero; Anderson, Dustin et al

E-print/Working paper (2018)

Machine learning is an important research area in particle physics, beginning with applications to high-level physics analysis in the 1990s and 2000s, followed by an explosion of applications in particle and event identification and reconstruction in the 2010s. In this document we discuss promising future research and development areas in machine learning in particle physics with a roadmap for their implementation, software and hardware resource requirements, collaborative initiatives with the data science community, academia and industry, and training the particle physics community in data science. The main objective of the document is to connect and motivate these areas of research and development with the physics drivers of the High-Luminosity Large Hadron Collider and future neutrino experiments and identify the resource needs for their implementation. Additionally we identify areas where collaboration with external communities will be of great benefit.

Peer Reviewed
New approaches using machine learning for fast shower simulation in ATLAS
Hasib, Ahmed; Schaarschmidt, Jana; Gadatsch, Stefan et al

Conference (2018, July 05)

Likelihood-free inference, effectively
Louppe, Gilles ULiege

Conference (2018, June 22)

Deep Learning: Past, present and future
Louppe, Gilles ULiege

Conference given outside the academic context (2018)

Gradient Energy Matching for Distributed Asynchronous Gradient Descent
Hermans, Joeri ULiege; Louppe, Gilles ULiege

E-print/Working paper (2018)

Distributed asynchronous SGD has become widely used for deep learning in large-scale systems, but remains notorious for its instability when increasing the number of workers. In this work, we study the dynamics of distributed asynchronous SGD under the lens of Lagrangian mechanics. Using this description, we introduce the concept of energy to describe the optimization process and derive a sufficient condition ensuring its stability as long as the collective energy induced by the active workers remains below the energy of a target synchronous process. Making use of this criterion, we derive a stable distributed asynchronous optimization procedure, GEM, that estimates and maintains the energy of the asynchronous system below or equal to the energy of sequential SGD with momentum. Experimental results highlight the stability and speedup of GEM compared to existing schemes, even when scaling to one hundred asynchronous workers. Results also indicate better generalization compared to the targeted SGD with momentum.
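
Read loosely, the stability criterion above says that (with E a kinetic-energy-like functional of the parameter updates; this is a paraphrase in symbols, and the precise definition of E is given in the paper)

\[
\sum_{k \in \text{active workers}} E_k(t) \;\le\; E_{\text{SGD+momentum}}(t),
\]

and GEM rescales each worker's contribution so that this inequality keeps holding throughout training.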

Mining gold from implicit models to improve likelihood-free inference
Brehmer, Johann; Louppe, Gilles ULiege; Pavez, Juan et al

E-print/Working paper (2018)

Simulators often provide the best description of real-world phenomena; however, they also lead to challenging inverse problems because the density they implicitly define is often intractable. We present a new suite of simulation-based inference techniques that go beyond the traditional Approximate Bayesian Computation approach, which struggles in a high-dimensional setting, and extend methods that use surrogate models based on neural networks. We show that additional information, such as the joint likelihood ratio and the joint score, can often be extracted from simulators and used to augment the training data for these surrogate models. Finally, we demonstrate that these new techniques are more sample efficient and provide higher-fidelity inference than traditional methods.
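
For orientation, the augmented quantities referred to above can be written as (our reading of the abstract; precise statements and proofs are in the paper)

\[
r(x, z \mid \theta_0, \theta_1) = \frac{p(x, z \mid \theta_0)}{p(x, z \mid \theta_1)},
\qquad
t(x, z \mid \theta_0) = \nabla_\theta \log p(x, z \mid \theta)\big|_{\theta_0},
\]

both of which are tractable along a single simulator run even though the marginal density p(x | θ) is not. The useful property is that a regressor trained on them recovers the intractable marginal quantities, e.g.

\[
\arg\min_{\hat r} \; \mathbb{E}_{x, z \sim p(\cdot \mid \theta_1)}\!\left[\big(\hat r(x) - r(x, z \mid \theta_0, \theta_1)\big)^2\right]
= \frac{p(x \mid \theta_0)}{p(x \mid \theta_1)}.
\]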

A Guide to Constraining Effective Field Theories with Machine Learning
Brehmer, Johann; Cranmer, Kyle; Louppe, Gilles ULiege et al

E-print/Working paper (2018)

We develop, discuss, and compare several inference techniques to constrain theory parameters in collider experiments. By harnessing the latent-space structure of particle physics processes, we extract extra information from the simulator. This augmented data can be used to train neural networks that precisely estimate the likelihood ratio. The new methods scale well to many observables and high-dimensional parameter spaces, do not require any approximations of the parton shower and detector response, and can be evaluated in microseconds. Using weak-boson-fusion Higgs production as an example process, we compare the performance of several techniques. The best results are found for likelihood ratio estimators trained with extra information about the score, the gradient of the log likelihood function with respect to the theory parameters. The score also provides sufficient statistics that contain all the information needed for inference in the neighborhood of the Standard Model. These methods enable us to put significantly stronger bounds on effective dimension-six operators than the traditional approach based on histograms. They also outperform generic machine learning methods that do not make use of the particle physics structure, demonstrating their potential to substantially improve the new physics reach of the LHC legacy results.
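
The "score" used above is the gradient of the log likelihood with respect to the theory parameters, i.e. (in the notation commonly used in this line of work)

\[
t(x \mid \theta_0) = \nabla_\theta \log p(x \mid \theta)\big|_{\theta = \theta_0},
\]

which, in a sufficiently small neighborhood of the reference point θ₀ (here the Standard Model), summarizes all the information an observation x carries about θ.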

Constraining Effective Field Theories with Machine Learning
Brehmer, Johann; Cranmer, Kyle; Louppe, Gilles ULiege et al

E-print/Working paper (2018)

We present powerful new analysis techniques to constrain effective field theories at the LHC. By leveraging the structure of particle physics processes, we extract extra information from Monte-Carlo simulations, which can be used to train neural network models that estimate the likelihood ratio. These methods scale well to processes with many observables and theory parameters, do not require any approximations of the parton shower or detector response, and can be evaluated in microseconds. We show that they allow us to put significantly stronger bounds on dimension-six operators than existing methods, demonstrating their potential to improve the precision of the LHC legacy constraints.

Adversarial Games for Particle Physics
Louppe, Gilles ULiege

Conference (2018, January 18)

Peer Reviewed
Robust EEG-based cross-site and cross-protocol classification of states of consciousness
Engemann, D. A.; Raimondo, Federico ULiege; King, J.-R. et al

in Brain: a Journal of Neurology (2018), 141(11), 3179-3192

Determining the state of consciousness in patients with disorders of consciousness is a challenging practical and theoretical problem. Recent findings suggest that multiple markers of brain activity extracted from the EEG may index the state of consciousness in the human brain. Furthermore, machine learning has been found to optimize their capacity to discriminate different states of consciousness in clinical practice. However, it is unknown how dependable these EEG markers are in the face of signal variability because of different EEG configurations, EEG protocols and subpopulations from different centres encountered in practice. In this study we analysed 327 recordings of patients with disorders of consciousness (148 unresponsive wakefulness syndrome and 179 minimally conscious state) and 66 healthy controls obtained in two independent research centres (Paris Pitié-Salpêtrière and Liège). We first show that a non-parametric classifier based on ensembles of decision trees provides robust out-of-sample performance on unseen data with a predictive area under the curve (AUC) of ~0.77 that was only marginally affected when using alternative EEG configurations (different numbers and positions of sensors, numbers of epochs, average AUC = 0.750 ± 0.014). In a second step, we observed that classifiers based on multiple as well as single EEG features generalize to recordings obtained from different patient cohorts, EEG protocols and different centres. However, the multivariate model always performed best with a predictive AUC of 0.73 for generalization from Paris 1 to Paris 2 datasets, and an AUC of 0.78 from Paris to Liège datasets. Using simulations, we subsequently demonstrate that multivariate pattern classification has a decisive performance advantage over univariate classification as the stability of EEG features decreases, as different EEG configurations are used for feature-extraction or as noise is added. Moreover, we show that the generalization performance from Paris to Liège remains stable even if up to 20% of the diagnostic labels are randomly flipped. Finally, consistent with recent literature, analysis of the learned decision rules of our classifier suggested that markers related to dynamic fluctuations in theta and alpha frequency bands carried independent information and were most influential. Our findings demonstrate that EEG markers of consciousness can be reliably, economically and automatically identified with machine learning in various clinical and acquisition contexts.
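
To make the out-of-sample protocol above concrete, a minimal sketch with scikit-learn (purely illustrative: the feature matrices, the number of EEG markers and the forest settings are placeholders, not the study's pipeline):

import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Placeholder EEG-marker matrices: one row per recording, one column per marker.
X_paris, y_paris = rng.normal(size=(200, 28)), rng.integers(0, 2, 200)
X_liege, y_liege = rng.normal(size=(100, 28)), rng.integers(0, 2, 100)

clf = ExtraTreesClassifier(n_estimators=1000, class_weight="balanced", random_state=0)
clf.fit(X_paris, y_paris)                                        # train on one centre ...
auc = roc_auc_score(y_liege, clf.predict_proba(X_liege)[:, 1])   # ... evaluate on the other
print(f"cross-site AUC: {auc:.2f}")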

Peer Reviewed
Random Subspace with Trees for Feature Selection Under Memory Constraints
Sutera, Antonio ULiege; Châtel, Célia; Louppe, Gilles ULiege et al

in Storkey, Amos; Perez-Cruz, Fernando (Eds.) Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics (2018)

Dealing with datasets of very high dimension is a major challenge in machine learning. In this paper, we consider the problem of feature selection in applications where the memory is not large enough to contain all features. In this setting, we propose a novel tree-based feature selection approach that builds a sequence of randomized trees on small subsamples of variables, mixing both variables already identified as relevant by previous models and variables randomly selected among the other variables. As our main contribution, we provide an in-depth theoretical analysis of this method in the infinite-sample setting. In particular, we study its soundness with respect to common definitions of feature relevance and its convergence speed under various variable dependence scenarios. We also provide some preliminary empirical results highlighting the potential of the approach.
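
A hedged sketch of the kind of sequential random-subspace loop described above (the function name, relevance threshold and forest settings are illustrative, not the paper's algorithm or code):

import numpy as np
from sklearn.ensemble import ExtraTreesClassifier

def sequential_subspace_selection(X, y, q=20, n_iter=50, threshold=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    relevant = set()                                    # variables found relevant so far
    for _ in range(n_iter):
        keep = sorted(relevant)[:q // 2]                # reuse part of the known-relevant set ...
        pool = [j for j in range(X.shape[1]) if j not in keep]
        extra = rng.choice(pool, size=min(q - len(keep), len(pool)), replace=False)
        subset = np.array(keep + list(extra))           # ... mixed with freshly sampled variables
        forest = ExtraTreesClassifier(n_estimators=100, random_state=seed)
        forest.fit(X[:, subset], y)
        for j, imp in zip(subset, forest.feature_importances_):
            if imp > threshold:                         # crude relevance criterion for the sketch
                relevant.add(int(j))
    return sorted(relevant)

In an actual memory-constrained deployment the q selected columns would be loaded on demand rather than sliced from an in-memory X; the sketch only illustrates the mixing of previously relevant and randomly drawn variables across successive tree models.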

Improvements to Inference Compilation for Probabilistic Programming in Large-Scale Scientific Simulators
Lezcano Casado, Mario; Gunes Baydin, Atilim; Martinez Rubio, David et al

E-print/Working paper (2017)

We consider the problem of Bayesian inference in the family of probabilistic models implicitly defined by stochastic generative models of data. In scientific fields ranging from population biology to cosmology, low-level mechanistic components are composed to create complex generative models. These models lead to intractable likelihoods and are typically non-differentiable, which poses challenges for traditional approaches to inference. We extend previous work in "inference compilation", which combines universal probabilistic programming and deep learning methods, to large-scale scientific simulators, and introduce a C++ based probabilistic programming library called CPProb. We successfully use CPProb to interface with SHERPA, a large code-base used in particle physics. Here we describe the technical innovations realized and planned for this library.

Adversarial Games for Particle Physics
Louppe, Gilles ULiege

Conference (2017, December 08)

Peer Reviewed
Neural Message Passing for Jet Physics
Henrion, Isaac; Brehmer, Johann; Bruna, Joan et al

Conference (2017, December 08)
