References of "Hermans, Joeri"
Adversarial Variational Optimization of Non-Differentiable Simulators
Louppe, Gilles (ULiège); Hermans, Joeri (ULiège); Cranmer, Kyle
In Proceedings of Machine Learning Research (April 2019). Peer reviewed.

Complex computer simulators are increasingly used across fields of science as generative models tying parameters of an underlying theory to experimental observations. Inference in this setup is often difficult, as simulators rarely admit a tractable density or likelihood function. We introduce Adversarial Variational Optimization (AVO), a likelihood-free inference algorithm for fitting a non-differentiable generative model, incorporating ideas from generative adversarial networks, variational optimization and empirical Bayes. We adapt the training procedure of Wasserstein GANs by replacing the differentiable generative network with a domain-specific simulator. We solve the resulting non-differentiable minimax problem by minimizing variational upper bounds of the two adversarial objectives. Effectively, the procedure results in learning a proposal distribution over simulator parameters, such that the Wasserstein distance between the marginal distribution of the synthetic data and the empirical distribution of observed data is minimized. We present results of the method with simulators producing both discrete and continuous data.
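
To make the central trick concrete, here is a minimal, self-contained sketch of variational optimization over a non-differentiable simulator. It is not the paper's implementation: the toy `simulator`, the summary-statistic `objective` standing in for the Wasserstein critic, and the hidden parameter 2.5 are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulator(theta, n=256):
    # Non-differentiable black-box simulator: quantized Gaussian draws.
    return np.round(rng.normal(loc=theta, scale=1.0, size=n))

x_obs = simulator(2.5)  # "observed" data, hidden parameter theta* = 2.5

def objective(theta):
    # Toy stand-in for the critic-based Wasserstein objective: a
    # distance between synthetic and observed summary statistics.
    x_syn = simulator(theta)
    return abs(x_syn.mean() - x_obs.mean()) + abs(x_syn.std() - x_obs.std())

# Variational optimization: replace min_theta f(theta) with
# min_{mu, log_sigma} E_{theta ~ N(mu, sigma^2)}[f(theta)], whose
# gradient exists even though f itself is non-differentiable
# (score-function / REINFORCE estimator).
mu, log_sigma = 0.0, 0.0
lr, n_samples = 0.05, 32
for step in range(500):
    sigma = np.exp(log_sigma)
    thetas = rng.normal(mu, sigma, size=n_samples)
    fs = np.array([objective(t) for t in thetas])
    adv = fs - fs.mean()  # baseline for variance reduction
    g_mu = np.mean((thetas - mu) / sigma**2 * adv)
    g_ls = np.mean(((thetas - mu) ** 2 / sigma**2 - 1.0) * adv)
    mu -= lr * g_mu
    log_sigma -= lr * g_ls

print(f"learned proposal: mu={mu:.2f}, sigma={np.exp(log_sigma):.2f}")
```

The key point is that the score-function gradient needs only samples from the proposal and evaluations of the black-box objective, never gradients of the simulator itself.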

Likelihood-free MCMC with Approximate Likelihood Ratios
Hermans, Joeri (ULiège); Begy, Volodimir; Louppe, Gilles (ULiège)
E-print/Working paper (2019)

We propose a novel approach for posterior sampling with intractable likelihoods. This is an increasingly important problem in scientific applications where models are implemented as sophisticated computer simulations. As a result, tractable densities are not available, which forces practitioners to rely on approximations during inference. We address the intractability of densities by training a parameterized classifier whose output is used to approximate likelihood ratios between arbitrary model parameters. In turn, we are able to draw posterior samples by plugging this approximator into common Markov chain Monte Carlo samplers such as Metropolis-Hastings and Hamiltonian Monte Carlo. We demonstrate the proposed technique by fitting the generating parameters of implicit models, ranging from a linear probabilistic model to settings in high energy physics with high-dimensional observations. Finally, we discuss several diagnostics to assess the quality of the posterior.
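
A minimal sketch of the ratio trick and its use inside Metropolis-Hastings follows. It is not the authors' code: the 1-D Gaussian `simulator`, the scikit-learn MLP classifier, the hidden parameter 1.5, and the flat prior on [-3, 3] are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def simulator(theta, n):
    # Implicit model: we can sample from it but treat its density
    # as intractable.
    return rng.normal(loc=theta, scale=1.0, size=n)

# Train an amortized classifier d(x, theta) to distinguish pairs drawn
# jointly from pairs with theta shuffled. Its odds d / (1 - d) then
# approximate p(x|theta) / p(x), a likelihood ratio up to a
# theta-independent factor that cancels in Metropolis-Hastings.
n_train = 20000
thetas = rng.uniform(-3, 3, size=n_train)
xs = simulator(thetas, n_train)
joint = np.column_stack([xs, thetas])                  # class 1
marginal = np.column_stack([xs, rng.permutation(thetas)])  # class 0
X = np.vstack([joint, marginal])
y = np.concatenate([np.ones(n_train), np.zeros(n_train)])
clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=100).fit(X, y)

def log_ratio(x_obs, theta):
    # Sum of per-observation log-odds ~ log p(x_obs | theta) + const.
    feats = np.column_stack([x_obs, np.full_like(x_obs, theta)])
    p = np.clip(clf.predict_proba(feats)[:, 1], 1e-6, 1 - 1e-6)
    return np.sum(np.log(p) - np.log(1.0 - p))

# Metropolis-Hastings with the approximate ratio in place of the
# intractable likelihood (flat prior on [-3, 3]).
x_obs = simulator(1.5, 20)  # data generated at hidden theta = 1.5
theta, samples = 0.0, []
for _ in range(2000):
    prop = theta + rng.normal(0.0, 0.5)
    if -3 <= prop <= 3:
        log_alpha = log_ratio(x_obs, prop) - log_ratio(x_obs, theta)
        if np.log(rng.uniform()) < log_alpha:
            theta = prop
    samples.append(theta)

print(f"posterior mean ~= {np.mean(samples[500:]):.2f}")
```

Because the classifier is amortized over theta, it is trained once and then reused for every acceptance test in the chain.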

Gradient Energy Matching for Distributed Asynchronous Gradient Descent
Hermans, Joeri (ULiège); Louppe, Gilles (ULiège)
E-print/Working paper (2018)

Distributed asynchronous SGD has become widely used for deep learning in large-scale systems, but remains notorious for its instability when increasing the number of workers. In this work, we study the dynamics of distributed asynchronous SGD through the lens of Lagrangian mechanics. Using this description, we introduce the concept of energy to describe the optimization process and derive a sufficient condition ensuring its stability as long as the collective energy induced by the active workers remains below the energy of a target synchronous process. Making use of this criterion, we derive a stable distributed asynchronous optimization procedure, GEM, that estimates and maintains the energy of the asynchronous system at or below the energy of sequential SGD with momentum. Experimental results highlight the stability and speedup of GEM compared to existing schemes, even when scaling to one hundred asynchronous workers. Results also indicate better generalization compared to the target SGD-with-momentum process.
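
A heavily simplified sketch of the energy-matching idea on a toy quadratic problem: gradient staleness is ignored, and the rescaling rule below (matching the collective squared update norm of the workers to the squared norm of a sequential momentum proxy) only follows the spirit of GEM, not the paper's exact derivation. The function `grad`, the 8-worker loop, and all constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad(w):
    # Stochastic gradient of a toy quadratic with minimum at (3, 3).
    return 2.0 * (w - 3.0) + rng.normal(0.0, 0.1, size=w.shape)

n_workers, lr, beta = 8, 0.05, 0.9
w = np.zeros(2)  # shared parameters
m = np.zeros(2)  # momentum proxy of the target sequential SGD-with-momentum

for step in range(200):
    for _ in range(n_workers):
        g = grad(w)
        # Kinetic-energy matching: n_workers unscaled updates would carry
        # roughly n_workers * ||g||^2 of energy, so rescale each update so
        # the collective energy matches ||m||^2, the energy of the target
        # sequential momentum process (simplified heuristic).
        target = np.dot(m, m) + 1e-12
        raw = n_workers * np.dot(g, g) + 1e-12
        w -= lr * np.sqrt(target / raw) * g
        m = beta * m + g  # advance the sequential-momentum proxy

print("w ~=", np.round(w, 2))  # should drift toward the minimum at (3, 3)
```

The design intent is that no worker ever injects more energy than the stable sequential process would, which is what keeps the scheme from diverging as the worker count grows.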
