Keywords :
Gym-ANM; reinforcement learning; active network management; distribution networks; renewable energy
Abstract :
[en] Active network management (ANM) of electricity distribution networks encompasses many complex stochastic sequential optimization problems that must be solved to integrate renewable energies and distributed storage into future electrical grids. In this work, we introduce Gym-ANM, a framework for designing reinforcement learning (RL) environments that model ANM tasks in electricity distribution networks. These environments provide new playgrounds for RL research in the management of electricity networks without requiring extensive knowledge of the underlying system dynamics. Along with this work, we release an implementation of an introductory toy environment, ANM6-Easy, designed to highlight common challenges in ANM. We also show that state-of-the-art RL algorithms can already achieve good performance on ANM6-Easy when compared against a model predictive control (MPC) approach. Finally, we provide guidelines for creating new Gym-ANM environments that differ in terms of (a) the distribution network topology and parameters, (b) the observation space, (c) the modelling of the stochastic processes present in the system, and (d) a set of hyperparameters influencing the reward signal. Gym-ANM can be downloaded at https://github.com/robinhenry/gym-anm.
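Since Gym-ANM environments follow the standard OpenAI Gym interface, interacting with one reduces to the usual reset/step loop. The sketch below illustrates that loop with a trivial stand-in environment so it is self-contained; `DummyANMEnv`, its observation, and its reward are hypothetical placeholders, not the real ANM6-Easy dynamics (in the released package an environment would instead be obtained through Gym's registry, e.g. via `gym.make`).

```python
import random

class DummyANMEnv:
    """Hypothetical stand-in following the gym.Env reset/step contract."""

    def reset(self):
        # Start a new episode and return the initial observation.
        self.t = 0
        return [0.0]

    def step(self, action):
        # Advance the (toy) system one timestep.
        self.t += 1
        obs = [float(self.t)]
        reward = -abs(action)   # e.g. penalize large control actions
        done = self.t >= 5      # fixed-horizon episode for this sketch
        return obs, reward, done, {}

env = DummyANMEnv()
obs = env.reset()
total_reward = 0.0
done = False
while not done:
    action = random.uniform(-1.0, 1.0)  # random agent, for illustration
    obs, reward, done, info = env.step(action)
    total_reward += reward
```

An RL agent would replace the random action with its policy's output; the MPC baseline in the paper instead plans actions from a model of the network.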
Disciplines :
Computer science; Energy
Author, co-author :
Henry, Robin
Ernst, Damien ; Université de Liège - ULiège > Department of Electrical Engineering and Computer Science (Montefiore Institute) > Smart grids
Language :
English
Title :
Gym-ANM: Reinforcement learning environments for active network management tasks in electricity distribution systems