Keywords: reinforcement learning; gym; distribution network; active network management
Abstract:
Gym-ANM is a Python package that facilitates the design of reinforcement learning (RL) environments that model active network management (ANM) tasks in electricity networks. Here, we describe how to implement new environments and how to write code to interact with pre-existing ones. We also provide an overview of ANM6-Easy, an environment designed to highlight common ANM challenges. Finally, we discuss the potential impact of Gym-ANM on the scientific community, both in terms of research and education. We hope this package will facilitate collaboration between the power system and RL communities in the search for algorithms to control future energy systems.
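As a minimal sketch of the interaction mentioned in the abstract, the snippet below runs a random agent on the bundled ANM6-Easy task. It assumes the package registers the environment under the Gym ID 'gym_anm:ANM6Easy-v0' and follows the classic OpenAI Gym interface, in which step() returns a 4-tuple; both are assumptions, since this record does not document the API.

    import time
    import gym

    # Build the ANM6-Easy environment; the 'gym_anm:' prefix tells Gym
    # which module to import before looking up the environment ID.
    env = gym.make('gym_anm:ANM6Easy-v0')
    obs = env.reset()

    for _ in range(100):
        action = env.action_space.sample()          # random policy, for illustration only
        obs, reward, done, info = env.step(action)  # classic Gym 4-tuple return
        env.render()                                # visualize the network state
        time.sleep(0.5)                             # slow the rendering down

    env.close()

Swapping env.action_space.sample() for the output of a trained RL agent is, under these assumptions, all that is needed to evaluate a learned ANM policy on the same task.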
Disciplines:
Energy; Computer science; Electrical & electronics engineering
Author, co-author:
Henry, Robin
Ernst, Damien; Université de Liège - ULiège > Department of Electrical Engineering and Computer Science (Institut Montefiore) > Smart grids
Document language:
English
Title:
Gym-ANM: Open-source software to leverage reinforcement learning for power system management in research and education