Reference: Contributions to Batch Mode Reinforcement Learning

Dissertations and theses: Doctoral thesis

Engineering, computing & technology: Computer science

http://hdl.handle.net/2268/85194

Contributions to Batch Mode Reinforcement Learning

English

Fonteneau, Raphaël [Université de Liège - ULiège > Dép. d'électric., électron. et informat. (Inst. Montefiore) > Systèmes et modélisation]

24-Feb-2011

Université de Liège, Liège, Belgium

Doctorat en Sciences de l'Ingénieur

Ernst, Damien

Wehenkel, Louis

Louveaux, Quentin

Sepulchre, Rodolphe

Munos, Rémi

Murphy, Susan

Sebag, Michèle

[en] Reinforcement Learning; Machine Learning; Optimal Control; Artificial Intelligence

[en] This dissertation presents research contributions published over four years of PhD work in the field of batch mode reinforcement learning, which studies optimal control problems for which the only information available on the system dynamics and the reward function is a set of trajectories.
We first focus on deterministic problems in continuous spaces. In this context, and under some assumptions related to the smoothness of the environment, we propose a new approach for inferring bounds on the performance of control policies. From these bounds we also derive a new inference algorithm that generalizes the information contained in the batch collection of trajectories in a cautious manner. This inference algorithm in turn led us to propose a min-max generalization framework. When working on batch mode reinforcement learning problems, one also often has to consider the problem of generating informative trajectories. This dissertation proposes two approaches to this problem. The first uses the bounds mentioned above to generate data that tighten these bounds. The second generates data that are predicted to change the inferred optimal control policy. While the above-mentioned contributions consider a deterministic framework, we also report on two research contributions that consider a stochastic setting. The first addresses the problem of evaluating the expected return of control policies in the presence of disturbances. The second proposes a technique for selecting relevant variables in a batch mode reinforcement learning context, in order to compute simplified control policies that are based on smaller sets of state variables.
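The batch mode setting described in the abstract can be illustrated with a minimal sketch: the learner receives only a batch of one-step transitions (s, a, r, s') and must infer a control policy from them. The sketch below uses tabular fitted Q iteration, a standard batch mode algorithm from the literature closely related to this line of work; the toy four-state chain, the constants, and all function names here are hypothetical illustrations, not material from the thesis.

```python
# A minimal sketch of batch mode reinforcement learning via tabular
# fitted Q iteration. The toy MDP and all names are hypothetical.
GAMMA = 0.9
STATES = range(4)      # states 0..3; state 3 is absorbing
ACTIONS = (-1, +1)     # move left or right

# Batch of one-step transitions (s, a, r, s') -- the only information
# available on the system dynamics and the reward function.
batch = []
for s in STATES:
    for a in ACTIONS:
        s_next = 3 if s == 3 else min(max(s + a, 0), 3)
        r = 1.0 if s != 3 and s_next == 3 else 0.0
        batch.append((s, a, r, s_next))

def fitted_q_iteration(batch, n_iterations=50):
    """Iterate Q_N(s,a) = r + gamma * max_a' Q_{N-1}(s',a') over the batch."""
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(n_iterations):
        # Each sweep fits the new Q function to the batch; with a tabular
        # representation the "regression" step is exact.
        q = {(s, a): r + GAMMA * max(q[(s2, a2)] for a2 in ACTIONS)
             for (s, a, r, s2) in batch}
    return q

q = fitted_q_iteration(batch)
# Greedy policy with respect to the inferred Q function.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES}
print(policy)
```

On this deterministic chain the greedy policy moves right toward the rewarding state, showing how a policy can be inferred purely from the batch of trajectories, without any further interaction with the system.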

Fonds pour la formation à la Recherche dans l'Industrie et dans l'Agriculture (Communauté française de Belgique) - FRIA

Researchers


