Abstract:
A large number of problems can be formalized as finding the best symbolic expression to maximize a given numerical objective. Most approaches to approximately solve such problems rely on random exploration of the search space. This paper focuses on how this random exploration should be performed to take into account expression redundancy and invalid expressions. We propose a learning algorithm that, given the set of available constants, variables, and operators and given the target finite number of trials, computes a probability distribution that maximizes the expected number of semantically different, valid, generated expressions. We illustrate the use of our approach on both medium-scale and large-scale expression spaces, and empirically show that such optimized distributions significantly outperform the uniform distribution in terms of the diversity of generated expressions. We further test the method in combination with the recently proposed nested Monte-Carlo algorithm on a set of benchmark symbolic regression problems and demonstrate its benefit in reducing the number of required calls to the objective function.
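To make the setting concrete, the sketch below (not the paper's algorithm; symbol sets, arities, and probabilities are hypothetical) samples prefix-notation expressions symbol by symbol from a probability distribution over constants, variables, and operators, and counts how many distinct valid expressions a fixed budget of trials produces, so a uniform and a biased distribution can be compared. Syntactic distinctness is used here as a rough proxy for the semantic distinctness discussed in the abstract.

import random

SYMBOLS = {          # symbol -> arity (0 = constant or variable); hypothetical alphabet
    "x": 0, "1": 0,
    "+": 2, "*": 2, "sin": 1,
}

def sample_expression(probs, max_depth=6, depth=0):
    """Sample one expression as a prefix tuple; return None if the draw is invalid."""
    symbols, weights = zip(*probs.items())
    s = random.choices(symbols, weights=weights, k=1)[0]
    arity = SYMBOLS[s]
    if depth >= max_depth and arity > 0:
        return None                      # invalid: operator drawn at the depth limit
    children = []
    for _ in range(arity):
        child = sample_expression(probs, max_depth, depth + 1)
        if child is None:
            return None
        children.append(child)
    return (s, *children)

def distinct_valid(probs, trials=10_000):
    """Count distinct valid expressions obtained within a fixed number of trials."""
    seen = set()
    for _ in range(trials):
        e = sample_expression(probs)
        if e is not None:
            seen.add(e)
    return len(seen)

if __name__ == "__main__":
    uniform = {s: 1.0 for s in SYMBOLS}
    biased = {"x": 2.0, "1": 1.0, "+": 1.5, "*": 1.5, "sin": 0.5}  # hypothetical non-uniform weights
    print("uniform distribution:", distinct_valid(uniform))
    print("biased distribution :", distinct_valid(biased))

The paper's contribution is precisely to learn such non-uniform weights so that the expected number of distinct valid expressions within the trial budget is maximized, rather than hand-picking them as in this illustration.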