Article (Scientific journals)
Distributional Reinforcement Learning with Unconstrained Monotonic Neural Networks
Théate, Thibaut; Wehenkel, Antoine; Bolland, Adrien et al.
2023, in Neurocomputing, 534, p. 199-219
Peer Reviewed verified by ORBi
Files

Full Text
Preprint.pdf: Author preprint (12.64 MB)

Full Text Parts
Neurocomputing version.pdf: Publisher postprint (3.6 MB)

All documents in ORBi are protected by a user license.

Details



Keywords :
Artificial Intelligence; Machine Learning; Distributional Reinforcement Learning
Abstract :
[en] The distributional reinforcement learning (RL) approach advocates for representing the complete probability distribution of the random return instead of only modelling its expectation. A distributional RL algorithm may be characterised by two main components, namely the representation of the distribution together with its parameterisation and the probability metric defining the loss. The present research work considers the unconstrained monotonic neural network (UMNN) architecture, a universal approximator of continuous monotonic functions which is particularly well suited for modelling different representations of a distribution. This property enables the efficient decoupling of the effect of the function approximator class from that of the probability metric. The research paper firstly introduces a methodology for learning different representations of the random return distribution (PDF, CDF and QF). Secondly, a novel distributional RL algorithm named unconstrained monotonic deep Q-network (UMDQN) is presented. To the authors’ knowledge, it is the first distributional RL method supporting the learning of three valid and continuous representations of the random return distribution. Lastly, in light of this new algorithm, an empirical comparison is performed between three probability quasi-metrics, namely the Kullback–Leibler divergence, Cramér distance, and Wasserstein distance. The results highlight the main strengths and weaknesses associated with each probability metric together with an important limitation of the Wasserstein distance.
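To illustrate the core UMNN idea mentioned in the abstract, the following is a minimal NumPy sketch: a monotonic function is obtained by numerically integrating a network whose output is constrained to be strictly positive. The weights here are random placeholders and the trapezoidal rule is a simplification of the quadrature used in practice; this is not the paper's implementation, only a sketch of the construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-hidden-layer "embedded network" g(t); the exp output
# activation guarantees g(t) > 0 everywhere (placeholder weights).
W1, b1 = rng.normal(size=(8, 1)), rng.normal(size=8)
W2, b2 = rng.normal(size=(1, 8)) * 0.1, np.zeros(1)

def g(t):
    """Strictly positive integrand."""
    h = np.tanh(W1 @ np.atleast_1d(t) + b1)
    return float(np.exp(W2 @ h + b2))

def F(x, n_steps=200):
    """Monotonic function F(x) = integral_0^x g(t) dt,
    approximated with a simple trapezoidal rule."""
    ts = np.linspace(0.0, x, n_steps)
    gs = np.array([g(t) for t in ts])
    return float(np.sum((gs[1:] + gs[:-1]) / 2.0 * np.diff(ts)))

# Since g > 0, F is monotonically increasing by construction,
# which is what makes it suitable for modelling a CDF or QF.
xs = np.linspace(-2.0, 2.0, 21)
ys = [F(x) for x in xs]
assert all(a < b for a, b in zip(ys, ys[1:]))
```

Because monotonicity comes from the architecture rather than from weight constraints, the same construction can parameterise a CDF or a quantile function directly, which is what allows the decoupling of representation and probability metric described above.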
Research Center/Unit :
Montefiore Institute - Montefiore Institute of Electrical Engineering and Computer Science - ULiège
Disciplines :
Computer science
Author, co-author :
Théate, Thibaut; Université de Liège - ULiège > Dép. d'électric., électron. et informat. (Inst. Montefiore) > Smart grids
Wehenkel, Antoine; Université de Liège - ULiège > Dép. d'électric., électron. et informat. (Inst. Montefiore) > Big Data
Bolland, Adrien; Université de Liège - ULiège > Dép. d'électric., électron. et informat. (Inst. Montefiore) > Smart grids
Louppe, Gilles; Université de Liège - ULiège > Dép. d'électric., électron. et informat. (Inst. Montefiore) > Big Data
Ernst, Damien; Université de Liège - ULiège > Dép. d'électric., électron. et informat. (Inst. Montefiore) > Smart grids
Language :
English
Title :
Distributional Reinforcement Learning with Unconstrained Monotonic Neural Networks
Publication date :
14 May 2023
Journal title :
Neurocomputing
ISSN :
0925-2312
eISSN :
1872-8286
Publisher :
Elsevier, Netherlands
Volume :
534
Pages :
199-219
Peer reviewed :
Peer Reviewed verified by ORBi
Funders :
F.R.S.-FNRS - Fonds de la Recherche Scientifique
Available on ORBi :
since 08 June 2021

Statistics

Number of views: 261 (27 by ULiège)
Number of downloads: 217 (10 by ULiège)
Scopus citations®: 4 (3 without self-citations)
OpenCitations: 0
OpenAlex citations: 4
