Abstract:
Traditional optimisation tools focus on deterministic problems: scheduling airline flight crews (with as few employees as possible while still meeting legal constraints, such as maximum working time), finding the shortest path in a graph (used by navigation systems to give directions), and so on.
However, this deterministic assumption sometimes yields useless solutions: the actual parameters cannot always be known to full precision, one reason being their inherent randomness. For example, when scheduling trucks for freight transportation, unexpected congestion on the roads may cause deadlines to be missed, and the company may then have to pay compensation for this delay, but also for the subsequent deliveries that could not be made on schedule.
Two main approaches are developed in the literature to take this uncertainty into account: making decisions based on probability distributions of the uncertain parameters (stochastic programming), or considering that they lie in a so-called "uncertainty set" (robust programming). In general, the first one leads to a large increase in the size of the problems to solve (and thus requires algorithms to work around this curse of dimensionality), while the second is more conservative but tends to change the nature of the programs (which can impose a new solver technology). This talk compares the two approaches on three different cases: facility location, unit commitment, and reservoir management.
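To make the contrast concrete, the following minimal sketch (a toy capacity-sizing example, not one of the three case studies) compares the two objectives on the same data: the stochastic view minimises the expected cost over demand scenarios, while the robust view minimises the worst-case cost over an interval uncertainty set built from those scenarios. All names and numbers (build_cost, shortage_penalty, the sampled demands) are hypothetical.

```python
# Toy illustration: choosing a capacity x under uncertain demand, comparing the
# stochastic (expected-cost) and robust (worst-case) objectives on the same
# scenarios. All data are invented for illustration only.
import numpy as np

rng = np.random.default_rng(0)
scenarios = rng.normal(100.0, 15.0, size=50)          # sampled demand scenarios
probs = np.full(scenarios.size, 1.0 / scenarios.size) # equiprobable scenarios

build_cost = 1.0        # cost per unit of capacity bought now (first stage)
shortage_penalty = 4.0  # cost per unit of unmet demand (recourse)

def stochastic_cost(x):
    # expected cost over the discrete scenarios (stochastic programming view)
    recourse = shortage_penalty * np.maximum(scenarios - x, 0.0)
    return build_cost * x + float(probs @ recourse)

def robust_cost(x, uncertainty_set):
    # worst-case cost over an interval uncertainty set (robust programming view)
    d_lo, d_hi = uncertainty_set
    return build_cost * x + shortage_penalty * max(d_hi - x, 0.0)

# box uncertainty set built from the same scenarios
U = (scenarios.min(), scenarios.max())

grid = np.linspace(0.0, 200.0, 2001)
x_sto = grid[np.argmin([stochastic_cost(x) for x in grid])]
x_rob = grid[np.argmin([robust_cost(x, U) for x in grid])]
print(f"stochastic decision: {x_sto:.1f}, robust decision: {x_rob:.1f}")
```

With a shortage penalty four times the capacity cost, the stochastic decision settles near a quantile of the empirical demand distribution, while the robust decision covers the worst demand in the uncertainty set.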
On the implementation side, several algorithms have been implemented to solve the stochastic programs and compare their relative performance: Benders' decomposition, progressive hedging, and the deterministic equivalent. When comparing stochastic and robust programming, differences appear in many respects, even though the literature comparing the two is very scarce. (Furthermore, the two approaches are not incompatible: both can be used in the same optimisation model to handle different parts of the uncertainty.)
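As a rough illustration of the third method, the sketch below writes the deterministic equivalent (extensive form) of the same kind of toy two-stage problem, assuming the PuLP modelling library; the scenario data are invented. Benders' decomposition and progressive hedging would solve the same model by decomposing it per scenario instead of writing all the scenario copies into one program.

```python
# Hypothetical deterministic equivalent (extensive form) of a toy two-stage
# problem, assuming the PuLP modelling library; all data are invented.
import pulp

demand = {"low": 80.0, "mid": 100.0, "high": 140.0}   # demand per scenario
prob = {"low": 0.3, "mid": 0.5, "high": 0.2}          # scenario probabilities
build_cost, recourse_cost = 1.0, 4.0                  # first/second-stage unit costs

model = pulp.LpProblem("deterministic_equivalent", pulp.LpMinimize)

# first-stage decision: capacity bought before the demand is known
x = pulp.LpVariable("capacity", lowBound=0)
# second-stage (recourse) decisions: one copy per scenario
y = {s: pulp.LpVariable(f"extra_{s}", lowBound=0) for s in demand}

# expected cost: first-stage cost plus probability-weighted recourse costs
model += build_cost * x + pulp.lpSum(prob[s] * recourse_cost * y[s] for s in demand)

# in every scenario, capacity plus recourse purchases must cover the demand
for s in demand:
    model += x + y[s] >= demand[s]

model.solve(pulp.PULP_CBC_CMD(msg=False))
print("capacity:", x.value(), "expected cost:", pulp.value(model.objective))
```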
Concerning solving time, stochastic programming quickly gives rise to intractable problems, which requires the development of more specialised algorithms just to solve them to an acceptable accuracy in a reasonable time. What is more, the stochastic description of the uncertain values (a discretisation of the probability distribution through scenarios) must cover all the possible uncertainty; otherwise, the solution risks overfitting those scenarios and is likely to perform poorly on close but different scenarios that may happen in practice. This imposes the use of a large number of scenarios, which yields very large (and hardly tractable) optimisation programs.
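The size blow-up can be seen with a back-of-the-envelope count: the extensive form duplicates every second-stage variable and constraint once per scenario, and covering several independent uncertain parameters by discretisation multiplies the number of scenarios. The figures below are purely hypothetical, not those of the case studies.

```python
# Illustrative size counts only: the extensive form adds one copy of every
# second-stage variable per scenario, and discretising d independent uncertain
# parameters with k values each needs k**d scenarios.
first_stage_vars = 50
second_stage_vars = 200       # recourse variables per scenario (hypothetical)
values_per_parameter = 5      # discretisation of each uncertain parameter
for n_uncertain_params in (2, 4, 8):
    n_scenarios = values_per_parameter ** n_uncertain_params
    n_vars = first_stage_vars + n_scenarios * second_stage_vars
    print(f"{n_uncertain_params} uncertain parameters -> "
          f"{n_scenarios} scenarios, {n_vars} variables")
```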
On the other hand, by using appropriate uncertainty sets, robust programming yields programs that are only slightly harder to solve, with an objective value very close to that of stochastic programming, but with totally different robustness properties: by using an uncertainty set computed from the scenarios rather than the scenarios themselves, it is able to withstand much higher uncertainty than stochastic programming. However, for other types of uncertainty, this conclusion might no longer hold, with robust programming unable to cope with them or to bring useful solutions to the table.
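As a sketch of why such a robust counterpart stays cheap, the toy capacity problem can be written with a box (interval) uncertainty set built from the scenarios, again assuming PuLP and invented data: since the worst case of the covering constraint is attained at the upper endpoint of the interval, the robust model keeps the size and linear structure of the deterministic one, with no per-scenario copies.

```python
# Hypothetical robust counterpart with a box uncertainty set derived from the
# scenarios; all data are invented for illustration.
import pulp

scenario_demands = [80.0, 100.0, 140.0]
d_max = max(scenario_demands)   # upper endpoint of the box uncertainty set
build_cost, recourse_cost = 1.0, 4.0

model = pulp.LpProblem("robust_counterpart", pulp.LpMinimize)
x = pulp.LpVariable("capacity", lowBound=0)   # here-and-now decision
y = pulp.LpVariable("extra", lowBound=0)      # purchase needed in the worst case

# worst-case cost instead of expected cost
model += build_cost * x + recourse_cost * y
# a single constraint, written for the worst demand in the uncertainty set
model += x + y >= d_max

model.solve(pulp.PULP_CBC_CMD(msg=False))
print("robust capacity:", x.value())
```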