Diffusion models have recently proven to be remarkable priors for Bayesian inverse
problems. However, training these models typically requires access to large
amounts of clean data, which can be difficult to obtain in some settings. In this
work, we present a novel method based on the expectation-maximization algorithm
for training diffusion models from incomplete and noisy observations only.
Unlike previous works, our method leads to proper diffusion models, which is
crucial for downstream tasks. As part of our method, we propose and motivate a
new posterior sampling scheme for unconditional diffusion models. We present
empirical evidence supporting the effectiveness of our method.
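To make the expectation-maximization idea sketched in the abstract concrete, the following toy is a hedged illustration, not the paper's method: it replaces the diffusion prior with a 1-D Gaussian prior so that the E-step (posterior inference given noisy observations under the current prior) and M-step (refitting the prior to the inferred posterior statistics) are fully runnable. All names and the observation model `y = x + eps` are assumptions made for this sketch.

```python
import numpy as np

# Toy stand-in for learning a prior from noisy observations only.
# Observation model (assumed): y = x + eps, eps ~ N(0, s2_noise).
# The paper fits a diffusion model as the prior and uses a posterior
# sampling scheme for the E-step; here a Gaussian prior N(mu, s2)
# plays that role so the EM loop runs in closed form.

rng = np.random.default_rng(0)
s2_noise = 0.5
x_true = rng.normal(2.0, 1.0, size=10_000)                      # hidden clean data
y = x_true + rng.normal(0.0, np.sqrt(s2_noise), x_true.shape)   # noisy observations

mu, s2 = 0.0, 4.0  # deliberately wrong initial prior guess
for _ in range(50):
    # E-step: Gaussian posterior of x given y under the current prior
    post_var = 1.0 / (1.0 / s2 + 1.0 / s2_noise)
    post_mean = post_var * (mu / s2 + y / s2_noise)
    # M-step: refit the prior to the expected sufficient statistics
    mu = post_mean.mean()
    s2 = (post_var + (post_mean - mu) ** 2).mean()

print(f"mu={mu:.2f}, s2={s2:.2f}")  # estimates close to the true prior (2.0, 1.0)
```

The key point the sketch shares with the paper's setting is that the clean data `x_true` are never used for fitting: the prior is refined purely from its own posterior over the noisy observations.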
Disciplines :
Computer science
Author, co-author :
Rozet, François ; Université de Liège - ULiège > Department of Electrical Engineering and Computer Science (Montefiore Institute) > Big Data
Andry, Gérôme ; Université de Liège - ULiège > Montefiore Institute of Electrical Engineering and Computer Science
Lanusse, François
Louppe, Gilles ; Université de Liège - ULiège > Department of Electrical Engineering and Computer Science (Montefiore Institute) > Big Data
Language :
English
Title :
Learning Diffusion Priors from Observations by Expectation Maximization
Publication date :
22 May 2024
Event name :
Advances in Neural Information Processing Systems 38
Event place :
Vancouver, Canada
Event date :
December 10-15, 2024
Audience :
International
Journal title :
Advances in Neural Information Processing Systems
ISSN :
1049-5258
Publisher :
Curran Associates, United States
Peer reviewed :
Peer Reviewed verified by ORBi
Tags :
CÉCI : Consortium des Équipements de Calcul Intensif
Tier-1 supercomputer