
GECCO '19- Proceedings of the Genetic and Evolutionary Computation Conference

SESSION: General evolutionary computation and hybrids

Adaptive objective selection for multi-fidelity optimization

  • Youhei Akimoto
  • Takuma Shimizu
  • Takahiro Yamaguchi

In simulation-based optimization we often have access to multiple simulators or surrogate models that approximate a computationally expensive or intractable objective function with different tradeoffs between fidelity and computational time. Such a setting is called multi-fidelity optimization. In this paper, we propose a novel strategy that adaptively selects which simulator to use during optimization with comparison-based evolutionary algorithms. Our adaptive switching strategy works as a wrapper around multiple simulators: optimization algorithms optimize the wrapper function, and the adaptive switching strategy selects a simulator inside the wrapper, implying wide applicability of the proposed approach. We empirically investigate how efficiently the adaptive switching strategy manages simulator selection on test problems, and theoretically investigate how it changes the fidelity level during optimization.
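
As a rough illustration of the wrapper idea, the Python sketch below wraps several simulators ordered by fidelity and switches to a higher-fidelity simulator when the cheap one no longer ranks recent candidates consistently. The rank-agreement criterion, the probe size, and the threshold are illustrative assumptions, not the paper's adaptive switching rule.

    # Hypothetical sketch of a multi-fidelity wrapper for a comparison-based optimizer.
    # `simulators` are ordered from cheapest/lowest fidelity to most expensive/highest.
    # The switching rule (rank agreement on a probe batch) is an assumption for illustration.
    import random

    class MultiFidelityWrapper:
        def __init__(self, simulators, probe_size=5, agreement_threshold=0.8):
            self.simulators = simulators          # list of callables f(x) -> float
            self.level = 0                        # start at the cheapest simulator
            self.probe_size = probe_size
            self.agreement_threshold = agreement_threshold
            self.archive = []                     # recently evaluated candidates

        def __call__(self, x):
            value = self.simulators[self.level](x)
            self.archive.append(x)
            if len(self.archive) >= self.probe_size:
                self._maybe_switch()
                self.archive.clear()
            return value

        def _maybe_switch(self):
            # If we are already at the highest fidelity, nothing to do.
            if self.level + 1 >= len(self.simulators):
                return
            low = self.simulators[self.level]
            high = self.simulators[self.level + 1]
            probe = self.archive[-self.probe_size:]
            # Fraction of candidate pairs ranked identically by both fidelity levels.
            pairs = [(a, b) for i, a in enumerate(probe) for b in probe[i + 1:]]
            agree = sum((low(a) < low(b)) == (high(a) < high(b)) for a, b in pairs)
            if pairs and agree / len(pairs) < self.agreement_threshold:
                self.level += 1                   # low fidelity no longer ranks reliably

    # Toy usage: a noisy cheap "simulator" and an exact expensive one.
    rng = random.Random(0)
    cheap = lambda x: sum(v * v for v in x) + rng.gauss(0, 1.0)
    exact = lambda x: sum(v * v for v in x)
    f = MultiFidelityWrapper([cheap, exact])
    print(f([rng.uniform(-5, 5) for _ in range(3)]))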

Hyper-parameter tuning for the (1 + (λ, λ)) GA

  • Nguyen Dang
  • Carola Doerr

It is known that the (1 + (λ, λ)) Genetic Algorithm (GA) with self-adjusting parameter choices achieves a linear expected optimization time on OneMax if its hyper-parameters are suitably chosen. However, it is not very well understood how the hyper-parameter settings influence the overall performance of the (1 + (λ, λ)) GA. Analyzing such multi-dimensional dependencies precisely is at the edge of what running time analysis can offer. To make a step forward on this question, we present an in-depth empirical study of the self-adjusting (1 + (λ, λ)) GA and its hyper-parameters. We show, among many other results, that a 15% reduction of the average running time is possible by a slightly different setup, which allows non-identical offspring population sizes for the mutation and crossover phases and more flexibility in the choice of mutation rate and crossover bias, a generalization which may be of independent interest. We also find indications that the parametrization of mutation rate and crossover bias derived by theoretical means for the static variant of the (1 + (λ, λ)) GA extends to the non-static case.
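
For reference, here is a minimal Python sketch of the self-adjusting (1 + (λ, λ)) GA on OneMax with the standard parametrization (mutation rate p = λ/n, crossover bias c = 1/λ, one-fifth success rule with update strength F). The generalizations studied in the paper, such as non-identical offspring population sizes, are not included.

    # Minimal sketch of the self-adjusting (1 + (λ, λ)) GA on OneMax.
    import random

    def one_max(x):
        return sum(x)

    def one_plus_lambda_lambda_ga(n=100, F=1.5, seed=0, max_evals=100_000):
        rng = random.Random(seed)
        x = [rng.randint(0, 1) for _ in range(n)]
        fx, lam, evals = one_max(x), 1.0, 0
        while fx < n and evals < max_evals:
            k = max(1, round(lam))
            p, c = k / n, 1.0 / k
            # Mutation phase: flip the same number ℓ of random bits in each of k offspring.
            ell = sum(rng.random() < p for _ in range(n))
            mutants = []
            for _ in range(k):
                y = x[:]
                for i in rng.sample(range(n), ell):
                    y[i] = 1 - y[i]
                mutants.append(y)
            xprime = max(mutants, key=one_max)
            # Crossover phase: take each bit from the best mutant with probability c.
            offspring = []
            for _ in range(k):
                y = [xi if rng.random() >= c else xpi for xi, xpi in zip(x, xprime)]
                offspring.append(y)
            ybest = max(offspring, key=one_max)
            fy = one_max(ybest)
            evals += 2 * k
            # One-fifth success rule: shrink λ on success, grow it slowly on failure.
            lam = max(1.0, lam / F) if fy > fx else min(float(n), lam * F ** 0.25)
            if fy >= fx:
                x, fx = ybest, fy
        return evals, fx

    print(one_plus_lambda_lambda_ga())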

Scenario co-evolution for reinforcement learning on a grid world smart factory domain

  • Thomas Gabor
  • Andreas Sedlmeier
  • Marie Kiermeier
  • Thomy Phan
  • Marcel Henrich
  • Monika Pichlmair
  • Bernhard Kempter
  • Cornel Klein
  • Horst Sauer
  • Reiner Schmid (Siemens AG)
  • Jan Wieghardt

Adversarial learning has been established as a successful paradigm in reinforcement learning. We propose a hybrid adversarial learner where a reinforcement learning agent tries to solve a problem while an evolutionary algorithm tries to find problem instances that are hard to solve for the current expertise of the agent, causing the intelligent agent to co-evolve with a set of test instances or scenarios. We apply this setup, called scenario co-evolution, to a simulated smart factory problem that combines task scheduling with navigation of a grid world. We show that the agent trained this way outperforms conventional reinforcement learning. We also show that the scenarios evolved this way can provide useful test cases for the evaluation of any (however trained) agent.
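
The co-evolutionary loop can be sketched abstractly as below. The agent, its training step, the scenario encoding, and the fitness are toy placeholders rather than the paper's smart factory domain or RL algorithm.

    # Illustrative sketch of a scenario co-evolution loop: an agent is trained on the
    # current scenario population while an EA evolves scenarios on which the agent
    # performs poorly. All components here are placeholders for illustration.
    import random

    rng = random.Random(0)

    def evaluate(agent, scenario):
        # Placeholder: higher return means the agent solves the scenario better.
        return -sum((a - s) ** 2 for a, s in zip(agent, scenario))

    def train_agent(agent, scenarios, lr=0.1):
        # Placeholder "training": nudge the agent toward the mean scenario.
        mean = [sum(s[i] for s in scenarios) / len(scenarios) for i in range(len(agent))]
        return [a + lr * (m - a) for a, m in zip(agent, mean)]

    def mutate(scenario, sigma=0.3):
        return [s + rng.gauss(0, sigma) for s in scenario]

    def scenario_coevolution(dim=4, pop_size=10, generations=30):
        agent = [0.0] * dim
        scenarios = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(pop_size)]
        for _ in range(generations):
            agent = train_agent(agent, scenarios)
            # Scenario fitness is adversarial: how badly the agent performs on it.
            scored = sorted(scenarios, key=lambda s: evaluate(agent, s))
            hard = scored[: pop_size // 2]                 # keep the hardest half
            scenarios = hard + [mutate(rng.choice(hard)) for _ in range(pop_size - len(hard))]
        return agent, scenarios

    agent, hard_scenarios = scenario_coevolution()
    print(min(evaluate(agent, s) for s in hard_scenarios))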

On the impact of the cutoff time on the performance of algorithm configurators

  • George T. Hall
  • Pietro S. Oliveto
  • Dirk Sudholt

Algorithm configurators are automated methods to optimise the parameters of an algorithm for a class of problems. We evaluate the performance of a simple random local search configurator (ParamRLS) for tuning the neighbourhood size k of the RLSk algorithm. We measure performance as the expected number of configuration evaluations required to identify the optimal value for the parameter. We analyse the impact of the cutoff time κ (the time spent evaluating a configuration for a problem instance) on the expected number of configuration evaluations required to find the optimal parameter value, where configurations are compared using either best found fitness values (ParamRLS-F) or optimisation times (ParamRLS-T). We consider tuning RLSk for a variant of the Ridge function class (Ridge*), where the performance of each parameter value does not change during the run, and for the OneMax function class, where longer runs favour smaller k. We rigorously prove that ParamRLS-F efficiently tunes RLSk for Ridge* for any κ, while ParamRLS-T requires at least quadratic κ. For OneMax, ParamRLS-F identifies k = 1 as optimal with linear κ, while ParamRLS-T requires a κ of at least ω(n log n). For smaller κ, ParamRLS-F identifies that k > 1 performs better, while ParamRLS-T returns a value of k chosen uniformly at random.
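
To make the setup concrete, here is a toy Python sketch of ParamRLS-F tuning k for RLSk on OneMax: candidate parameter values are compared by the best fitness reached within a cutoff of κ evaluations. The configurator's neighbourhood, the budgets, and the tie-breaking are illustrative choices, not the paper's exact setting.

    # Toy sketch of ParamRLS-F tuning the neighbourhood size k of RLS_k on OneMax.
    import random

    def one_max(x):
        return sum(x)

    def run_rls_k(k, n, kappa, rng):
        # RLS_k: flip exactly k distinct bits per step, accept the move if not worse.
        x = [rng.randint(0, 1) for _ in range(n)]
        best = one_max(x)
        for _ in range(kappa):
            y = x[:]
            for i in rng.sample(range(n), k):
                y[i] = 1 - y[i]
            if one_max(y) >= one_max(x):
                x = y
            best = max(best, one_max(x))
        return best

    def param_rls_f(n=50, k_max=5, kappa=1000, iterations=30, seed=1):
        rng = random.Random(seed)
        k = rng.randint(1, k_max)                   # incumbent parameter value
        for _ in range(iterations):
            k_new = min(k_max, max(1, k + rng.choice([-1, 1])))
            # ParamRLS-F: compare parameter values by the best fitness reached within κ.
            if run_rls_k(k_new, n, kappa, rng) > run_rls_k(k, n, kappa, rng):
                k = k_new
        return k

    print(param_rls_f())                            # with a large κ, k = 1 tends to win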

Surrogates for hierarchical search spaces: the wedge-kernel and an automated analysis

  • Daniel Horn
  • Jörg Stork
  • Nils-Jannik Schüßler
  • Martin Zaefferer

Optimization in hierarchical search spaces deals with variables that only have an influence on the objective function if other variables fulfill certain conditions. These types of conditions complicate the optimization process. If the objective function is expensive to evaluate, these complications are further compounded. In particular, if surrogate models are learned to replace the objective function, they have to respect the hierarchical dependencies in the data. In this work, a new kernel is introduced that allows Kriging models (Gaussian process regression) to handle hierarchical search spaces. This new kernel is compared to existing kernels based on an artificial benchmark function. Thereby, we face a typical algorithm design problem: the statistical analysis of benchmark results. Instead of just identifying the best algorithm, it is often desirable to compute algorithm rankings that depend on additional experimental parameters. We propose a method for the automated analysis of such algorithm comparisons. Instead of investigating all parameter constellations of our artificial test function at once, we apply a clustering algorithm and analyze rankings of the algorithms within each cluster. This new method is used to analyze the above-mentioned benchmark of kernels for hierarchical search spaces.
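
The automated analysis can be illustrated with a generic cluster-then-rank sketch (the wedge-kernel itself is not reproduced here). The synthetic data, the number of clusters, and the choice of k-means are assumptions for illustration.

    # Generic sketch of the "cluster, then rank" analysis: benchmark settings are
    # clustered by their experimental parameters, and algorithm rankings are computed
    # within each cluster. Data and shapes are synthetic stand-ins.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)

    # Synthetic benchmark: 60 parameter constellations, 2 experimental parameters,
    # and the mean error of 3 algorithms (kernels) on each constellation.
    settings = rng.uniform(size=(60, 2))
    errors = np.column_stack([
        settings[:, 0] + 0.1 * rng.standard_normal(60),   # "kernel A"
        settings[:, 1] + 0.1 * rng.standard_normal(60),   # "kernel B"
        0.5 + 0.1 * rng.standard_normal(60),              # "kernel C"
    ])

    # Step 1: cluster the parameter constellations.
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(settings)

    # Step 2: rank algorithms (lower error = better) within each cluster.
    for c in range(3):
        mean_err = errors[labels == c].mean(axis=0)
        ranking = np.argsort(mean_err)                    # best algorithm first
        print(f"cluster {c}: ranking {ranking.tolist()}, mean errors {np.round(mean_err, 3)}")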

Meta-learning on flowshop using fitness landscape analysis

  • Lucas Marcondes Pavelski
  • Myriam Regattieri Delgado
  • Marie-Éléonore Kessaci

In the context of recommendation methods, meta-learning considers the use of previous knowledge about problem solutions and algorithm performance to indicate the best strategy whenever a new, similar problem is faced. This paper studies the use of meta-learning to recommend local search strategies for solving several instances of permutation flowshop problems. The proposed method for building the meta-learning model comprises three main phases: (i) extracting the problem features, (ii) building the performance database, and (iii) training the recommendation model. In this work, we extract instance features mainly through fitness landscape analysis, build the performance database using the Irace parameter tuning algorithm, and train neural network models for recommendation. The paper also analyzes two mechanisms that support the recommendation: one based on classification and another based on ranking. Experiments conducted on a wide range of different flowshop instances show that it is possible to recommend not only the best algorithms, but also some of their suitable configurations.
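
The three-phase pipeline might look roughly as follows in Python. The features, the labelling rule, and the random-forest recommender are stand-ins; the paper uses fitness landscape features, Irace-generated performance data, and neural networks.

    # Schematic sketch of the three-phase recommendation pipeline: (i) instance
    # features (random placeholders for fitness landscape features), (ii) a
    # performance database mapping instances to the best local search, (iii) a
    # trained model recommending a strategy for a new instance. All data is synthetic.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    strategies = ["first_improvement", "best_improvement", "tabu"]

    # Phase (i): one feature vector per flowshop instance (e.g. ruggedness, neutrality).
    features = rng.uniform(size=(200, 4))

    # Phase (ii): performance database; here the "best" strategy follows a synthetic rule.
    best = np.where(features[:, 0] > 0.6, 0, np.where(features[:, 1] > 0.5, 1, 2))

    # Phase (iii): train the recommendation model and query it for a new instance.
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(features, best)
    new_instance = rng.uniform(size=(1, 4))
    print("recommended strategy:", strategies[int(model.predict(new_instance)[0])])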

Surrogate models for enhancing the efficiency of neuroevolution in reinforcement learning

  • Jörg Stork
  • Martin Zaefferer
  • Thomas Bartz-Beielstein
  • A. E. Eiben

In recent years, reinforcement learning has received a lot of attention. One method to solve reinforcement learning tasks is Neuroevolution, where neural networks are optimized by evolutionary algorithms. A disadvantage of Neuroevolution is that it can require numerous function evaluations while not fully utilizing the available information from each fitness evaluation. This is especially problematic when fitness evaluations become expensive. To reduce the cost of fitness evaluations, surrogate models can be employed to partially replace the fitness function. The difficulty of surrogate modeling for Neuroevolution lies in the complex search space and in how to compare different networks. Recent studies showed that a kernel-based approach, particularly with phenotypic distance measures, works well. These kernels compare different networks via their behavior (phenotype) rather than their topology or encoding (genotype). In this work, we discuss the use of surrogate model-based Neuroevolution (SMB-NE) using a phenotypic distance for reinforcement learning. In detail, we investigate a) the potential of SMB-NE with respect to evaluation efficiency and b) how to select adequate input sets for the phenotypic distance measure in a reinforcement learning problem. The results indicate that we are able to considerably increase the evaluation efficiency using dynamic input sets.
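
A minimal sketch of a phenotypic distance and an induced kernel is given below. The toy networks, the input set, and the RBF form of the kernel are illustrative assumptions.

    # Minimal sketch of a phenotypic distance between two networks: both are queried
    # on a shared input set and compared by their outputs, ignoring topology/encoding.
    import numpy as np

    def phenotypic_distance(net_a, net_b, input_set):
        # Behaviour of a network = its outputs on the shared input set.
        out_a = np.array([net_a(x) for x in input_set])
        out_b = np.array([net_b(x) for x in input_set])
        return np.linalg.norm(out_a - out_b)

    def phenotypic_kernel(net_a, net_b, input_set, theta=1.0):
        # RBF-style kernel on the phenotypic distance, usable in a Kriging surrogate.
        return np.exp(-theta * phenotypic_distance(net_a, net_b, input_set) ** 2)

    # Toy example: two single-layer "networks" with different weights.
    rng = np.random.default_rng(0)
    w1, w2 = rng.standard_normal(4), rng.standard_normal(4)
    net1 = lambda x: np.tanh(w1 @ x)
    net2 = lambda x: np.tanh(w2 @ x)
    input_set = rng.standard_normal((16, 4))          # the shared (dynamic) input set
    print(phenotypic_distance(net1, net2, input_set), phenotypic_kernel(net1, net2, input_set))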

Algorithm portfolio for individual-based surrogate-assisted evolutionary algorithms

  • Hao Tong
  • Jialin Liu
  • Xin Yao

Surrogate-assisted evolutionary algorithms (SAEAs) are powerful optimisation tools for computationally expensive problems (CEPs). However, due to the no free lunch theorems, a randomly selected algorithm may fail on unknown problems, and re-running it or trying other algorithms to obtain a better solution consumes additional computational resources, which is especially serious for CEPs. In this paper, we consider an algorithm portfolio for SAEAs to reduce the risk of choosing an inappropriate algorithm for CEPs. We propose two portfolio frameworks for very expensive problems in which the maximal number of fitness evaluations is only 5 times the problem's dimension. One framework, named Par-IBSAEA, runs all algorithm candidates in parallel, while a more sophisticated framework, named UCB-IBSAEA, employs the Upper Confidence Bound (UCB) policy from reinforcement learning to help select the most appropriate algorithm at each iteration. An effective reward definition is proposed for the UCB policy. We consider three state-of-the-art individual-based SAEAs on different problems and compare them to the portfolios built from their instances on several benchmark problems under limited computation budgets. Our experimental studies demonstrate that our proposed portfolio frameworks significantly outperform any single algorithm on the set of benchmark problems.
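
The UCB-based selection can be illustrated with a generic UCB1 loop over toy portfolio members. The reward used here (normalized improvement of the best-so-far value) is a simple stand-in, not the paper's reward definition, and the "algorithms" are trivial samplers rather than SAEAs.

    # Generic UCB1 sketch for selecting a portfolio member at each iteration.
    import math, random

    rng = random.Random(0)
    sphere = lambda x: sum(v * v for v in x)

    def make_sampler(sigma):
        # Toy portfolio member: perturb the incumbent with step size sigma.
        return lambda best_x: [v + rng.gauss(0, sigma) for v in best_x]

    portfolio = [make_sampler(s) for s in (0.05, 0.5, 2.0)]
    counts, rewards = [0] * len(portfolio), [0.0] * len(portfolio)
    best_x = [rng.uniform(-5, 5) for _ in range(5)]
    best_f = sphere(best_x)

    for t in range(1, 61):                            # very small evaluation budget
        # UCB1: exploit high average reward, explore rarely-used members.
        ucb = [rewards[i] / counts[i] + math.sqrt(2 * math.log(t) / counts[i])
               if counts[i] else float("inf") for i in range(len(portfolio))]
        i = ucb.index(max(ucb))
        x = portfolio[i](best_x)
        fx = sphere(x)
        reward = max(0.0, (best_f - fx) / max(abs(best_f), 1e-12))
        counts[i] += 1
        rewards[i] += reward
        if fx < best_f:
            best_x, best_f = x, fx

    print(best_f, counts)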

Online selection of CMA-ES variants

  • Diederick Vermetten
  • Sander van Rijn
  • Thomas Bäck
  • Carola Doerr

In the field of evolutionary computation, one of the most challenging topics is algorithm selection. Knowing which heuristics to use for which optimization problem is key to obtaining high-quality solutions. We aim to extend this research topic by taking a first step towards a selection method for adaptive CMA-ES algorithms. We build upon the theoretical work done by van Rijn et al. [PPSN'18], in which the potential of switching between different CMA-ES variants was quantified in the context of a modular CMA-ES framework.

We demonstrate in this work that their proposed approach is not very reliable, in that implementing the suggested adaptive configurations does not yield the predicted performance gains. We propose a revised approach, which results in a more robust fit between predicted and actual performance. The adaptive CMA-ES approach obtains performance gains on 18 out of 24 tested functions of the BBOB benchmark, with stable advantages of up to 23%. An analysis of module activation indicates which modules are most crucial for the different phases of optimizing each of the 24 benchmark problems. The module activation also suggests that additional gains are possible when including the (B)IPOP modules, which we have excluded from the present work.
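
The switching idea can be caricatured with a simple evolution strategy whose state (mean and step size) is handed from one configuration to another at a split point. This toy is not the modular CMA-ES framework used in the paper; the configurations, update rules, and budget split are assumptions for illustration.

    # Illustrative sketch of switching configurations mid-run with a warm start.
    import random

    rng = random.Random(0)
    sphere = lambda x: sum(v * v for v in x)

    def es_phase(mean, sigma, budget, lam, elitist):
        # One configuration of a very simple (mu/mu, lambda)-style evolution strategy.
        mu = lam // 2
        best = (sphere(mean), mean)
        evals = 0
        while evals < budget:
            pop = [[m + sigma * rng.gauss(0, 1) for m in mean] for _ in range(lam)]
            evals += lam
            pop.sort(key=sphere)
            if elitist and sphere(best[1]) < sphere(pop[0]):
                pop[0] = best[1]                      # keep the best-so-far solution
            mean = [sum(p[i] for p in pop[:mu]) / mu for i in range(len(mean))]
            sigma *= 0.95 if sphere(mean) < best[0] else 1.05   # crude step-size rule
            best = min(best, (sphere(pop[0]), pop[0]))
        return mean, sigma, best

    dim, budget = 10, 4000
    mean = [rng.uniform(-5, 5) for _ in range(dim)]
    # Phase 1: large population, non-elitist; Phase 2: small population, elitist,
    # warm-started with the mean and step size of phase 1.
    mean, sigma, best = es_phase(mean, 1.0, budget // 2, lam=20, elitist=False)
    mean, sigma, best2 = es_phase(mean, sigma, budget // 2, lam=6, elitist=True)
    print(min(best, best2)[0])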

Using subpopulation EAs to map molecular structure landscapes

  • Ahmed Bin Zaman
  • Kenneth De Jong
  • Amarda Shehu

The emerging view in molecular biology is that molecules are intrinsically dynamic systems rearranging themselves into different structures to interact with molecules in the cell. Such rearrangements take place on energy landscapes that are vast and multimodal, with minima housing alternative structures. The multiplicity of biologically-active structures is prompting researchers to expand their treatment of classic computational biology problems, such as the template-free protein structure prediction problem (PSP), beyond the quest for the global optimum. In this paper, we revisit subpopulation-oriented EAs as vehicles to switch the objective from classic optimization to landscape mapping. Specifically, we present two EAs, one of which makes use of subpopulation competition to allocate more computational resources to fitter subpopulations, and another of which additionally utilizes a niche preservation technique to maintain stable and diverse subpopulations. Initial assessment on benchmark optimization problems confirms that stabler subpopulations are achieved by the niche-preserving EA. Evaluation on unknown energy landscapes in the context of PSP demonstrates superior mapping performance by both algorithms over a popular Monte Carlo-based method, with the niche-preserving EA achieving superior exploration of lower-energy regions. These results suggest that subpopulation EAs hold much promise for solving important mapping problems in computational structural biology.
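
A toy sketch of subpopulation competition is shown below: fitter subpopulations receive more offspring each generation, so computational resources concentrate on promising basins. The multimodal toy objective stands in for a protein energy landscape, and the niche-preservation mechanism of the second EA is not reproduced here.

    # Toy sketch of a subpopulation EA with fitness-proportional resource allocation.
    import math, random

    rng = random.Random(0)

    def energy(x):
        # Multimodal toy landscape (lower is better), with minima near integer coordinates.
        return sum(v * v / 20 + 1 - math.cos(2 * math.pi * v) for v in x)

    def evolve(dim=3, n_subpops=5, subpop_size=10, generations=50, total_offspring=30):
        subpops = [[[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(subpop_size)]
                   for _ in range(n_subpops)]
        for _ in range(generations):
            # Subpopulation fitness = best (lowest) energy among its members.
            best = [min(energy(x) for x in sp) for sp in subpops]
            weights = [1.0 / (1.0 + b - min(best)) for b in best]
            total = sum(weights)
            for sp, w in zip(subpops, weights):
                # Offspring budget proportional to the subpopulation's relative weight.
                n_off = max(1, round(total_offspring * w / total))
                for _ in range(n_off):
                    parent = min(rng.sample(sp, 2), key=energy)     # binary tournament
                    child = [p + rng.gauss(0, 0.2) for p in parent]
                    worst = max(range(len(sp)), key=lambda i: energy(sp[i]))
                    if energy(child) < energy(sp[worst]):
                        sp[worst] = child                           # replace the worst member
        return [min(energy(x) for x in sp) for sp in subpops]

    print(sorted(round(e, 3) for e in evolve()))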