The combination of reinforcement learning and deep learning has given rise to the research field of deep reinforcement learning.
Applications of this research have recently shown that it is possible to solve complex decision-making tasks that were previously believed to be extremely difficult for a computer.
Yet, deep reinforcement learning requires caution and an understanding of its inner mechanisms in order to be applied successfully in different settings.
As an introduction, we provide a general overview of the field of deep reinforcement learning. The thesis is then divided into two parts.
In the first part, we analyze reinforcement learning in the setting where only a limited amount of data is available, in the general context of partial observability. In this setting, we focus on the tradeoff between asymptotic bias (suboptimality with unlimited data) and overfitting (additional suboptimality due to limited data), and we show theoretically that a smaller state representation decreases the risk of overfitting, while potentially increasing the asymptotic bias.
One original theoretical contribution consists in expressing the quality of a state representation by bounding $L_1$ error terms of the associated belief states.
We also discuss, and empirically illustrate, the role of other parameters in optimizing the bias-overfitting tradeoff: the function approximator (in particular deep learning) and the discount factor.
In addition, we investigate the specific case of the discount factor in the deep reinforcement learning setting, where additional data can be gathered through learning.
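The idea that a reduced discount factor can act as a regularizer when planning on a model estimated from few samples can be illustrated with a small toy sketch. This is a minimal illustration, not the thesis's actual experiment: the random MDP, the sample size per state-action pair, and the two training discounts are all assumptions chosen for the example.

```python
# Toy sketch: planning on a model estimated from few samples, with a full
# versus a reduced training discount, then evaluating in the true MDP.
# All MDP details and numbers here are illustrative assumptions.
import numpy as np

def value_iteration(P, R, gamma, iters=500):
    """P: (S, A, S) transition tensor, R: (S, A) rewards. Returns a greedy policy."""
    S, A, _ = P.shape
    V = np.zeros(S)
    for _ in range(iters):
        Q = R + gamma * (P @ V)        # (S, A) action values
        V = Q.max(axis=1)
    return Q.argmax(axis=1)

def policy_value(P, R, policy, gamma):
    """Exact value of a stationary policy in the true MDP, from state 0."""
    S = P.shape[0]
    Pp = P[np.arange(S), policy]       # (S, S) transitions under the policy
    Rp = R[np.arange(S), policy]
    V = np.linalg.solve(np.eye(S) - gamma * Pp, Rp)
    return V[0]

rng = np.random.default_rng(0)
S, A = 5, 2
P_true = rng.dirichlet(np.ones(S), size=(S, A))
R = rng.uniform(0, 1, size=(S, A))

# Estimate transition probabilities from only K samples per (s, a) pair.
K = 5
counts = np.stack([[rng.multinomial(K, P_true[s, a]) for a in range(A)]
                   for s in range(S)])
P_hat = counts / K

# Plan on the estimated model with two training discounts, then evaluate
# both greedy policies in the true MDP at the evaluation discount 0.95.
for gamma_train in (0.95, 0.5):
    pi = value_iteration(P_hat, R, gamma_train)
    print(gamma_train, policy_value(P_true, R, pi, 0.95))
```

With few samples, the policy planned under the smaller training discount can match or beat the one planned at the full discount, because the shorter effective horizon limits how far errors in the estimated transitions propagate.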
In the second part of this thesis, we focus on a smartgrid application that is partially observable and for which only a limited amount of data is available (the setting studied in the first part of the thesis).
We consider the case of microgrids featuring photovoltaic (PV) panels associated with both long-term (hydrogen) and short-term (battery) storage devices. We propose a novel formalization of the problem of building and operating microgrids that interact with their surrounding environment. Under the assumption of a deterministic environment, we show how to optimally operate and size microgrids using linear programming techniques.
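The deterministic operation problem can be sketched as a small linear program. The sketch below is only in the spirit of such a formulation: the time horizon, the PV and load profiles, the battery model (lossless, a single capacity bound), and the objective (minimize unserved load) are all simplifying assumptions, not the thesis's actual model.

```python
# Minimal LP sketch of deterministic microgrid operation (illustrative
# assumptions throughout: lossless battery, tiny horizon, unserved-load cost).
import numpy as np
from scipy.optimize import linprog  # assumes SciPy is available

T = 4
pv = np.array([0.0, 3.0, 3.0, 0.0])      # assumed PV production per step
load = np.array([1.0, 1.0, 1.0, 2.0])    # assumed consumption per step
B, s0 = 2.0, 0.0                          # battery capacity, initial charge

# Decision vector x = [charge_t, discharge_t, unserved_t, curtailed_t], t = 0..T-1
n = 4 * T
cost = np.zeros(n)
cost[2*T:3*T] = 1.0                       # minimize total unserved load

# Energy balance per step: pv + discharge + unserved = load + charge + curtailed
A_eq = np.zeros((T, n))
b_eq = load - pv
for t in range(T):
    A_eq[t, t] = -1.0        # charge
    A_eq[t, T + t] = 1.0     # discharge
    A_eq[t, 2*T + t] = 1.0   # unserved
    A_eq[t, 3*T + t] = -1.0  # curtailed

# State-of-charge bounds: 0 <= s0 + cumsum(charge - discharge) <= B
A_ub = np.zeros((2*T, n))
b_ub = np.zeros(2*T)
for t in range(T):
    A_ub[t, :t+1] = 1.0;  A_ub[t, T:T+t+1] = -1.0; b_ub[t] = B - s0   # s_t <= B
    A_ub[T+t, :t+1] = -1.0; A_ub[T+t, T:T+t+1] = 1.0; b_ub[T+t] = s0  # s_t >= 0

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * n)
print(res.fun)  # total unserved load over the horizon
```

In this toy instance the battery shifts the midday PV surplus to the evening peak, so only the first-step deficit (before any energy has been stored) remains unserved. Sizing the microgrid amounts to also making capacities such as `B` decision variables with an investment cost in the objective.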
We then show how deep reinforcement learning can be used to operate microgrids under uncertainty, where, at every time step, the uncertainty comes from the lack of knowledge about future electricity consumption and weather-dependent PV production.
Disciplines:
Computer science
Author, co-author:
François-Lavet, Vincent ; Université de Liège > Dép. d'électric., électron. et informat. (Inst.Montefiore)
Language:
English
Title:
Contributions to deep reinforcement learning and its applications in smartgrids
Defense date:
11 September 2017
Number of pages:
177
Institution:
ULiège - Université de Liège
Degree:
Doctor of Philosophy in Computer Science
Promotors:
Ernst, Damien ; Université de Liège - ULiège > Montefiore Institute of Electrical Engineering and Computer Science
Fonteneau, Raphaël ; Université de Liège - ULiège > Montefiore Institute of Electrical Engineering and Computer Science
President:
Wehenkel, Louis ; Université de Liège - ULiège > Montefiore Institute of Electrical Engineering and Computer Science