[en] Among likelihood-based approaches to deep generative modelling, variational autoencoders (VAEs) offer scalable amortized posterior inference and fast sampling. However, VAEs are increasingly outperformed by competing models such as normalizing flows (NFs), deep energy-based models, and the more recent denoising diffusion probabilistic models (DDPMs). In this preliminary work, we improve VAEs by showing how a DDPM can be used to model the prior distribution of the latent variables. The diffusion prior improves upon the Gaussian priors of classical VAEs and is competitive with NF-based priors. Finally, we hypothesize that hierarchical VAEs could similarly benefit from the enhanced capacity of diffusion priors.
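To make the approach concrete, the sketch below pairs a standard Gaussian-encoder VAE with a DDPM trained on samples of its latent variables; at generation time, ancestral DDPM sampling in latent space replaces draws from the usual standard-normal prior. This is a minimal illustrative sketch only, not the authors' implementation: the MLP architectures, latent dimension, linear beta schedule, and number of diffusion steps are all assumptions made for brevity.

```python
# Minimal sketch (assumed setup, not the paper's code): a VAE whose latent
# prior is a DDPM trained on latents sampled from the encoder.
import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT_DIM, T = 16, 100  # illustrative latent size and diffusion step count

class VAE(nn.Module):
    def __init__(self, x_dim=784):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU(),
                                 nn.Linear(256, 2 * LATENT_DIM))
        self.dec = nn.Sequential(nn.Linear(LATENT_DIM, 256), nn.ReLU(),
                                 nn.Linear(256, x_dim))

    def forward(self, x):
        mu, log_var = self.enc(x).chunk(2, dim=-1)
        # Reparametrization trick: z ~ N(mu, sigma^2).
        z = mu + torch.randn_like(mu) * (0.5 * log_var).exp()
        return self.dec(z), mu, log_var, z

class EpsNet(nn.Module):
    """Noise-prediction network for the DDPM prior over z."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(LATENT_DIM + 1, 128), nn.ReLU(),
                                 nn.Linear(128, LATENT_DIM))

    def forward(self, z_t, t):
        # Condition on the normalized diffusion step.
        return self.net(torch.cat([z_t, t.float().unsqueeze(-1) / T], dim=-1))

betas = torch.linspace(1e-4, 0.02, T)          # linear beta schedule (assumed)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

def ddpm_loss(eps_net, z0):
    """Standard DDPM noise-matching loss, applied to (detached) VAE latents."""
    t = torch.randint(0, T, (z0.shape[0],))
    eps = torch.randn_like(z0)
    a = alphas_bar[t].unsqueeze(-1)
    z_t = a.sqrt() * z0.detach() + (1 - a).sqrt() * eps
    return F.mse_loss(eps_net(z_t, t), eps)

@torch.no_grad()
def sample_prior(eps_net, n):
    """Ancestral DDPM sampling in latent space; decode the result with vae.dec."""
    z = torch.randn(n, LATENT_DIM)
    for t in reversed(range(T)):
        tt = torch.full((n,), t)
        alpha, a_bar = 1.0 - betas[t], alphas_bar[t]
        z = (z - betas[t] / (1 - a_bar).sqrt() * eps_net(z, tt)) / alpha.sqrt()
        if t > 0:
            z = z + betas[t].sqrt() * torch.randn_like(z)
    return z
```

Under these assumptions, training proceeds in two stages: the VAE is fit with its usual reconstruction-plus-KL objective, then `ddpm_loss` is minimized on latents produced by the frozen encoder; new samples are obtained by decoding the output of `sample_prior`.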
Disciplines :
Computer science
Author, co-author :
Wehenkel, Antoine ; Université de Liège - ULiège > Department of Electrical Engineering and Computer Science (Institut Montefiore) > Big Data
Louppe, Gilles ; Université de Liège - ULiège > Department of Electrical Engineering and Computer Science (Institut Montefiore) > Big Data
Language :
English
Title :
Diffusion Priors In Variational Autoencoders
Publication date :
July 2021
Number of pages :
6
Event name :
ICML Workshop on Invertible Neural Networks, Normalizing Flows, and Explicit Likelihood Models
Event organizer :
Chin-Wei Huang, David Krueger, Rianne van den Berg, George Papamakarios, Ricky Chen, Danilo Rezende