Publications and communications of Adrien Bolland

Bolland, A. (19 February 2024). Behind the Myth of Exploration in Policy Gradients [Paper presentation]. Machine Learning and AI Academy.

Aittahar, S., Bolland, A., Derval, G., & Ernst, D. (2024). Optimal control of renewable energy communities subject to network peak fees with model predictive control and reinforcement learning algorithms. ORBi-University of Liège. https://orbi.uliege.be/handle/2268/312776.

Bolland, A. (11 January 2024). Understanding the influence of exploration on the dynamics of policy-gradient algorithms [Paper presentation]. Mathematical Institute of the University of Mannheim, Mannheim, Germany.

Bolland, A. (2024). Introduction to Gradient-Based Direct Policy Search. (ULiège - University of Liège [FACSA], Liège, Belgium).

Bolland, A. (2024). Advanced Policy-Gradient Algorithms. (ULiège - University of Liège [FACSA], Liège, Belgium).

Bolland, A., Lambrechts, G., & Ernst, D. (2024). Behind the Myth of Exploration in Policy Gradients. ORBi-University of Liège. https://orbi.uliege.be/handle/2268/312658.

Bolland, A., Louppe, G., & Ernst, D. (2023). Policy Gradient Algorithms Implicitly Optimize by Continuation. Transactions on Machine Learning Research.

Bolland, A., Louppe, G., & Ernst, D. (19 June 2023). Policy Gradient Algorithms Implicitly Optimize by Continuation [Poster presentation]. ICML Workshop on New Frontiers in Learning, Control, and Dynamical Systems, Honolulu, Hawaii, United States.

Lambrechts, G., Bolland, A., & Ernst, D. (June 2023). Informed POMDP: Leveraging Additional Information in Model-Based RL [Paper presentation]. ICML Workshop on New Frontiers in Learning, Control, and Dynamical Systems, Honolulu, Hawaii, United States.

Théate, T., Wehenkel, A., Bolland, A., Louppe, G., & Ernst, D. (14 May 2023). Distributional Reinforcement Learning with Unconstrained Monotonic Neural Networks. Neurocomputing, 534, 199-219. doi:10.1016/j.neucom.2023.02.049

Lambrechts, G., Bolland, A., & Ernst, D. (September 2022). Belief states of POMDPs and internal states of recurrent RL agents: an empirical analysis of their mutual information [Paper presentation]. European Workshop on Reinforcement Learning, Milan, Italy.

Lambrechts, G., Bolland, A., & Ernst, D. (2022). Recurrent networks, hidden states and beliefs in partially observable environments. Transactions on Machine Learning Research.

Bolland, A., Boukas, I., Berger, M., & Ernst, D. (January 2022). Jointly Learning Environments and Control Policies with Projected Stochastic Gradient Ascent. Journal of Artificial Intelligence Research, 73, 117-171. doi:10.1613/jair.1.13350

Boukas, I., Ernst, D., Théate, T., Bolland, A., Huynen, A., Buchwald, M., Wynants, C., & Cornélusse, B. (2021). A Deep Reinforcement Learning Framework for Continuous Intraday Market Bidding. Machine Learning. doi:10.1007/s10994-021-06020-8

Berger, M., Bolland, A., Miftari, B., Djelassi, H., & Ernst, D. (2021). Graph-Based Optimization Modeling Language: A Tutorial. ORBi-University of Liège. https://orbi.uliege.be/handle/2268/256705.