Keywords :
electricity market modelling; game theory; reinforcement learning
Abstract :
[en] In this paper we compare Nash equilibria analysis and agent-based modelling for assessing the market dynamics of network-constrained pool markets. Power suppliers submit bids to the marketplace in order to maximize their payoffs; reinforcement learning serves as the behavioural agent model. The market-clearing mechanism is based on the locational marginal pricing scheme. Simulations are carried out on a benchmark power system. We show how the evolution of the agent-based approach relates to the existence of a unique Nash equilibrium or of multiple equilibria in the system. Additionally, the sensitivity of the results to the model parameters is discussed. © 2006 Elsevier Ltd. All rights reserved.
Disciplines :
Electrical & electronics engineering
Author, co-author :
Krause, Thilo
Beck, Elena Vdovina
Cherkaoui, Rachid
Germond, Alain
Andersson, Goran
Ernst, Damien ; Université de Liège - ULiège > Dép. d'électric., électron. et informat. (Inst.Montefiore) > Systèmes et modélisation
Language :
English
Title :
A comparison of Nash equilibria analysis and agent-based modelling for power markets
Publication date :
November 2006
Journal title :
International Journal of Electrical Power and Energy Systems