Reinforcement learning aims to learn optimal policies from interaction with
environments whose dynamics are unknown. Many methods rely on the approximation
of a value function to derive near-optimal policies. In partially observable
environments, these functions depend on the complete sequence of observations
and past actions, called the history. In this work, we show empirically that
recurrent neural networks trained to approximate such value functions
internally filter the posterior probability distribution of the current state
given the history, called the belief. More precisely, we show that, as a
recurrent neural network learns the Q-function, its hidden states become
increasingly correlated with the beliefs of the state variables that are
relevant for optimal control. This correlation is measured through their mutual information.
In addition, we show that the expected return of an agent increases with the
ability of its recurrent architecture to reach a high mutual information
between its hidden states and the beliefs. Finally, we show that the mutual
information between the hidden states and the beliefs of variables that are
irrelevant for optimal control decreases through the learning process. In
summary, this work shows that in its hidden states, a recurrent neural network
approximating the Q-function of a partially observable environment reproduces a
sufficient statistic of the history that is correlated with the part of the
belief relevant for taking optimal actions.
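For readers unfamiliar with the terms, the belief and the mutual information mentioned in the abstract admit standard definitions; the notation below is the usual POMDP one, not taken from this record. For a transition model T(s' | s, a) and an observation model O(o | s', a), the belief is updated recursively by Bayes' rule, and the correlation measure is the mutual information between the hidden state H_t and the belief B_t:

b_{t+1}(s') = \frac{O(o_{t+1} \mid s', a_t) \sum_{s} T(s' \mid s, a_t)\, b_t(s)}{\sum_{\bar{s}} O(o_{t+1} \mid \bar{s}, a_t) \sum_{s} T(\bar{s} \mid s, a_t)\, b_t(s)}

I(H_t; B_t) = \mathbb{E}_{p(h_t, b_t)}\left[ \log \frac{p(h_t, b_t)}{p(h_t)\, p(b_t)} \right]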
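As a rough illustration of the kind of recurrent value-function architecture the abstract refers to, a minimal sketch in PyTorch follows. The GRU cell, the layer sizes, and all names here are assumptions for illustration, not the architecture used in the paper.

import torch
import torch.nn as nn

class RecurrentQNetwork(nn.Module):
    """Maps a history of (observation, previous action) pairs to Q-values.
    The GRU hidden state plays the role of the internal state whose mutual
    information with the belief is analysed."""
    def __init__(self, obs_dim, n_actions, hidden_dim=64):
        super().__init__()
        # Input at each step: current observation and one-hot previous action.
        self.rnn = nn.GRU(obs_dim + n_actions, hidden_dim, batch_first=True)
        self.q_head = nn.Linear(hidden_dim, n_actions)

    def forward(self, obs_seq, prev_action_seq):
        # obs_seq: (batch, time, obs_dim); prev_action_seq: (batch, time, n_actions)
        inputs = torch.cat([obs_seq, prev_action_seq], dim=-1)
        hidden_states, _ = self.rnn(inputs)    # (batch, time, hidden_dim)
        q_values = self.q_head(hidden_states)  # (batch, time, n_actions)
        return q_values, hidden_states

The hidden_states returned alongside the Q-values are what one would compare, via a mutual information estimator, with the beliefs computed by an exact Bayes filter on the same histories.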
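Likewise, a crude way to track the correlation described in the abstract is to estimate, from sampled trajectories, the mutual information between individual hidden units and one belief component. The estimator below (scikit-learn's k-nearest-neighbour based mutual_info_regression) is an assumption for illustration, not necessarily the estimator used in the paper, and per-unit estimates are only a proxy for the joint mutual information between the full hidden state and the belief.

import numpy as np
from sklearn.feature_selection import mutual_info_regression

def mi_hidden_vs_belief(hidden_states, belief_component, n_neighbors=3):
    """Estimate the mutual information between each hidden unit and one
    belief component (e.g. the posterior probability of one state variable).

    hidden_states: (n_samples, hidden_dim) array of RNN hidden states.
    belief_component: (n_samples,) array from an exact Bayes filter.
    Returns one MI estimate (in nats) per hidden unit.
    """
    return mutual_info_regression(hidden_states, belief_component,
                                  n_neighbors=n_neighbors)

Tracking, say, the maximum of these per-unit estimates over training iterations would, per the abstract, grow for control-relevant state variables and shrink for irrelevant ones.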
Disciplines :
Computer science
Author, co-author :
Lambrechts, Gaspard ; Université de Liège - ULiège > Département d'électricité, électronique et informatique (Institut Montefiore) > Smart grids
Bolland, Adrien ; Université de Liège - ULiège > Département d'électricité, électronique et informatique (Institut Montefiore) > Smart grids
Ernst, Damien ; Université de Liège - ULiège > Département d'électricité, électronique et informatique (Institut Montefiore) > Smart grids
Language :
English
Title :
Belief states of POMDPs and internal states of recurrent RL agents: an empirical analysis of their mutual information