GECCO '18: Proceedings of the Genetic and Evolutionary Computation Conference

SESSION: General evolutionary computation and hybrids

On the runtime analysis of selection hyper-heuristics with adaptive learning periods

  • Benjamin Doerr
  • Andrei Lissovoi
  • Pietro S. Oliveto
  • John Alasdair Warwicker

Selection hyper-heuristics are randomised optimisation techniques that select from a set of low-level heuristics which one should be applied in the next step of the optimisation process. Recently it has been proven that a Random Gradient hyper-heuristic optimises the LeadingOnes benchmark function in the best runtime achievable with any combination of its low-level heuristics, up to lower order terms. To achieve this runtime, the learning period τ, used to evaluate the performance of the currently chosen heuristic, should be set appropriately, i.e., super-linear in the problem size but not excessively large. In this paper we automate the hyper-heuristic further by allowing it to self-adjust the learning period τ during the run. To achieve this we equip the algorithm with a simple self-adjusting mechanism, called the 1 - o(1) rule, inspired by the 1/5 rule traditionally used in continuous optimisation. We rigorously prove that the resulting hyper-heuristic solves LeadingOnes in optimal runtime by automatically adapting τ and achieving a 1 - o(1) ratio of the desired behaviour. Complementary experiments for realistic problem sizes show that τ adapts as desired and that the hyper-heuristic with adaptive learning period outperforms hyper-heuristics with fixed learning periods.
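
A minimal sketch (not the authors' exact 1 - o(1) rule) of a Random Gradient hyper-heuristic whose learning period adjusts itself in the spirit of the 1/5 rule: the current heuristic is kept as long as it finds an improvement within τ steps, τ shrinks after a success and grows after a failed period. The update factors, termination condition and function names below are illustrative assumptions.

    import random

    def leading_ones(x):
        """Number of leading one-bits (the benchmark discussed above)."""
        count = 0
        for bit in x:
            if bit != 1:
                break
            count += 1
        return count

    def self_adjusting_random_gradient(fitness, n, heuristics,
                                       tau_init=100.0, factor=1.2,
                                       max_evals=200_000):
        """Random Gradient hyper-heuristic with a self-adjusting learning
        period tau (illustrative sketch)."""
        x = [random.randint(0, 1) for _ in range(n)]
        fx, evals = fitness(x), 1
        op = random.choice(heuristics)
        tau, steps_without_improvement = tau_init, 0
        while fx < n and evals < max_evals:
            y = op(x)
            fy, evals = fitness(y), evals + 1
            if fy > fx:                                # success within the period
                x, fx, steps_without_improvement = y, fy, 0
                tau = max(1.0, tau / factor)
            else:
                if fy == fx:
                    x = y                              # accept equally good moves
                steps_without_improvement += 1
                if steps_without_improvement >= tau:   # period expired: switch heuristic
                    op = random.choice(heuristics)
                    steps_without_improvement = 0
                    tau *= factor ** 0.5
        return x, fx

    # Example low-level heuristics: flip exactly one or exactly two random bits.
    def flip_k(k):
        def op(x):
            y = list(x)
            for i in random.sample(range(len(x)), k):
                y[i] = 1 - y[i]
            return y
        return op

    # self_adjusting_random_gradient(leading_ones, 100, [flip_k(1), flip_k(2)])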

Sequential sampling for noisy optimisation with CMA-ES

  • Matthew Groves
  • Juergen Branke

This paper proposes a novel sequential sampling scheme to allocate samples to individuals in order to maximally inform the selection step in Covariance Matrix Adaptation Evolution Strategies (CMA-ES) for noisy function optimisation. More specifically, we adapt the well-known Knowledge Gradient (KG) method to minimise the Kullback-Leibler divergence (relative entropy) between the distribution used for generating the next offspring population based on the μ selected individuals, and the distribution based on the true μ best individuals that would have been chosen in the absence of noise. Empirical tests demonstrate the benefit of integrating sequential sampling into CMA-ES, and that the proposed KG technique specifically adapted to the needs of CMA-ES indeed outperforms a more straightforward application of KG.
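
A much simplified sequential-allocation sketch of the underlying idea, not the paper's Knowledge Gradient criterion: extra noisy re-evaluations are given to the offspring whose ranking relative to the μ-selection boundary is least certain, so that the selected set is as close as possible to the true best μ. The allocation score, function names and budget handling are assumptions.

    import numpy as np

    def sequentially_sample_for_selection(sample, lam, mu,
                                          init_reps=3, budget=100):
        """sample(i) returns one noisy fitness evaluation (minimisation) of
        offspring i.  After an initial round, each remaining evaluation is
        spent on the individual whose estimated mean is least well separated
        (in standard errors) from the mu-selection boundary."""
        sums = np.zeros(lam)
        sq_sums = np.zeros(lam)
        counts = np.zeros(lam, dtype=int)
        for i in range(lam):
            for _ in range(init_reps):
                y = sample(i)
                sums[i] += y
                sq_sums[i] += y * y
                counts[i] += 1
        spent = lam * init_reps
        while spent < budget:
            means = sums / counts
            variances = np.maximum(sq_sums / counts - means ** 2, 1e-12)
            sem = np.sqrt(variances / counts)
            order = np.argsort(means)                       # best (lowest) first
            boundary = 0.5 * (means[order[mu - 1]] + means[order[mu]])
            ambiguity = np.abs(means - boundary) / sem      # small = uncertain rank
            i = int(np.argmin(ambiguity))
            y = sample(i)                                   # one more noisy sample
            sums[i] += y
            sq_sums[i] += y * y
            counts[i] += 1
            spent += 1
        return np.argsort(sums / counts)[:mu]               # indices handed to CMA-ES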

Adaptive asynchrony in semi-asynchronous evolutionary algorithm based on performance prediction using search history

  • Tomohiro Harada

This paper proposes an adaptation technique for an asynchronous evolutionary algorithm (EA) and verifies its effectiveness on multi-objective optimization problems. A parallel EA, which executes an EA in a parallel computational environment, can follow one of two approaches: a synchronous EA or an asynchronous EA. A synchronous approach generates a new population after all solutions have been evaluated, while an asynchronous approach continuously generates a new solution immediately after one solution evaluation completes. In addition, a semi-asynchronous EA has been proposed that can vary the number of solution evaluations it waits for before generating new solutions; this parameter is called the asynchrony. This paper explores a technique to adjust the asynchrony during the optimization process. For this purpose, the proposed method predicts the search performance of the EA for different asynchronies by simulation on the search history, and chooses the best asynchrony depending on the predicted performance. To verify the effectiveness of the proposed method, this paper compares these approaches on multi-objective optimization problems. The experimental results reveal that the performance of the proposed method is equal to or better than that of the semi-asynchronous approach with an appropriately chosen asynchrony, regardless of the variance of the evaluation time of solutions.
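
A sketch of the semi-asynchronous evaluation loop itself, with the asynchrony as an explicit parameter; the adaptive choice of the asynchrony via simulation on the search history, which is the contribution above, is omitted, and the population handling and interfaces of evaluate() and variation() are assumptions.

    from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait

    def semi_async_ea(evaluate, init_pop, variation, asynchrony,
                      workers=8, max_evals=1000):
        """Semi-asynchronous steady-state EA (minimisation): wait until
        `asynchrony` evaluations have completed before generating the same
        number of new solutions.  asynchrony=1 behaves asynchronously,
        asynchrony=len(init_pop) synchronously."""
        pop = []                                          # (solution, fitness) pairs
        evals = 0
        with ThreadPoolExecutor(max_workers=workers) as pool:
            pending = {pool.submit(evaluate, s): s for s in init_pop}
            while evals < max_evals:
                completed = 0
                while completed < asynchrony and pending:
                    done, _ = wait(pending, return_when=FIRST_COMPLETED)
                    for fut in done:
                        pop.append((pending.pop(fut), fut.result()))
                        evals += 1
                        completed += 1
                pop.sort(key=lambda sf: sf[1])            # survival of the best
                pop = pop[:len(init_pop)]
                for _ in range(completed):                # refill the pipeline
                    child = variation([s for s, _ in pop])
                    pending[pool.submit(evaluate, child)] = child
        return pop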

Expected improvement of constraint violation for expensive constrained optimization

  • Ruwang Jiao
  • Sanyou Zeng
  • Changhe Li
  • Yuhong Jiang
  • Junchen Wang

For computationally expensive constrained optimization problems, one crucial issue is that existing expected improvement (EI) criteria are no longer applicable when a feasible point is not available initially. To address this challenge, this paper uses the expected improvement of the constraint violation to reach the feasible region. A new constrained expected improvement criterion is proposed to select sample solutions for updating the Gaussian process (GP) surrogate models. The validity of the proposed constrained expected improvement criterion is proved theoretically. It is also verified by experimental studies, and the results show that it performs better than, or is competitive with, the compared criteria.
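
For reference, the generic expected-improvement formula applied to a GP prediction of the total constraint violation (to be driven to zero) looks as follows; this is a sketch of the underlying quantity, not necessarily the exact criterion proposed above.

    import math

    def expected_improvement_of_violation(mu, sigma, best_violation):
        """EI for minimisation, applied to the constraint violation:
        mu, sigma      - GP posterior mean and standard deviation of the
                         total violation at a candidate point,
        best_violation - smallest violation observed so far (0 once a
                         feasible point is known, at which point ordinary EI
                         on the objective can take over)."""
        if sigma <= 0.0:
            return max(best_violation - mu, 0.0)
        z = (best_violation - mu) / sigma
        cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
        pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
        return (best_violation - mu) * cdf + sigma * pdf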

Talakat: bullet hell generation through constrained map-elites

  • Ahmed Khalifa
  • Scott Lee
  • Andy Nealen
  • Julian Togelius

We describe a search-based approach to generating new levels for bullet hell games, which are action games characterized by, and requiring the avoidance of, a very large number of projectiles. Levels are represented using a domain-specific description language, and search in the space defined by this language is performed by a novel variant of the Map-Elites algorithm which incorporates a feasible-infeasible approach to constraint satisfaction. Simulation-based evaluation is used to gauge the fitness of levels, using an agent based on best-first search. The performance of the agent can be tuned along the two dimensions of strategy and dexterity, making it possible to search for level configurations that require a specific combination of both. As far as we know, this paper describes the first generator for this game genre, and it includes several algorithmic innovations.
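
One plausible way to combine a feasible-infeasible split with Map-Elites, sketched below: feasible levels compete on fitness within their behaviour cell, infeasible ones compete on closeness to feasibility. The archive layout, the parent-selection rule and all function names are assumptions rather than the paper's exact algorithm.

    import random

    def constrained_map_elites(random_genome, mutate, evaluate, descriptor,
                               iterations=10_000, init_random=100):
        """evaluate(g) is assumed to return (fitness, violation); a genome is
        feasible iff violation == 0.  descriptor(g) maps a genome to a
        hashable behaviour cell (e.g. a (strategy, dexterity) bucket)."""
        feasible, infeasible = {}, {}                 # cell -> (genome, score)
        for it in range(iterations):
            if it < init_random or not (feasible or infeasible):
                g = random_genome()
            else:
                pools = [a for a in (feasible, infeasible) if a]
                archive = random.choice(pools)
                g = mutate(random.choice(list(archive.values()))[0])
            fitness, violation = evaluate(g)
            cell = descriptor(g)
            if violation == 0:                        # feasible map: best fitness wins
                if cell not in feasible or fitness > feasible[cell][1]:
                    feasible[cell] = (g, fitness)
            else:                                     # infeasible map: least violation wins
                if cell not in infeasible or violation < infeasible[cell][1]:
                    infeasible[cell] = (g, violation)
        return feasible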

Neural estimation of interaction outcomes

  • Paweł Liskowski
  • Bartosz Wieloch
  • Krzysztof Krawiec

We propose Neural Estimation of Interaction Outcomes (NEIO), a method that reduces the number of required interactions between candidate solutions and tests in test-based problems. Given the outcomes of a random sample of all solution-test interactions, NEIO uses a neural network to predict the outcomes of the remaining interactions and so estimate the fitness of programs. We apply NEIO to genetic programming (GP) problems, i.e., test-based problems in which candidate solutions are programs, while tests are examples of the desired input-output program behavior. In an empirical comparison to several reference methods on categorical GP benchmarks, NEIO attains the highest rank on the success rate of synthesizing correct programs.
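
A small sketch of the idea, with an encoding and network size chosen purely for illustration (the paper's architecture is not reproduced here): the sampled interaction outcomes train a classifier on one-hot encoded (solution, test) pairs, and its predictions fill in the unsampled entries of the interaction matrix.

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    def estimate_interaction_outcomes(known, n_solutions, n_tests):
        """known is a dict {(solution_index, test_index): 0/1 outcome}
        holding a random sample of all interactions.  Returns the completed
        outcome matrix and per-solution fitness estimates (mean predicted
        outcome over all tests)."""
        def encode(s, t):
            x = np.zeros(n_solutions + n_tests)
            x[s] = 1.0
            x[n_solutions + t] = 1.0
            return x

        pairs = list(known)
        X = np.array([encode(s, t) for s, t in pairs])
        y = np.array([known[p] for p in pairs])
        net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000)
        net.fit(X, y)

        outcomes = np.zeros((n_solutions, n_tests))
        for s in range(n_solutions):
            for t in range(n_tests):
                if (s, t) in known:
                    outcomes[s, t] = known[(s, t)]      # keep observed outcomes
                else:
                    outcomes[s, t] = net.predict(encode(s, t).reshape(1, -1))[0]
        return outcomes, outcomes.mean(axis=1)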

Termination detection strategies in evolutionary algorithms: a survey

  • Yanfeng Liu
  • Aimin Zhou
  • Hu Zhang

This paper provides an overview of developments on termination conditions in evolutionary algorithms (EAs). It seeks to give a representative picture of termination conditions in EAs over the past decades and segments the contributions into progress indicators and termination criteria. With respect to progress indicators, we consider a variety of indicators, in particular convergence indicators and diversity indicators. With respect to termination criteria, this paper reviews recent research on threshold strategies, statistical inference (e.g., Kalman filters), fuzzy methods, and other approaches. Key developments on termination conditions over the decades include: (i) methods for judging the algorithm's search behavior based on statistics, and (ii) methods for detecting termination based on different distance formulations.
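
As a concrete instance of the threshold strategies surveyed here, a minimal termination criterion on a convergence-style progress indicator might look like the following; the window length and tolerance are illustrative values.

    def no_improvement_termination(best_per_generation, window=50, epsilon=1e-8):
        """Stop (return True) when the best-so-far fitness (minimisation)
        has improved by less than epsilon over the last `window` generations."""
        if len(best_per_generation) <= window:
            return False
        return (best_per_generation[-window - 1]
                - min(best_per_generation[-window:])) < epsilon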

Memetic algorithms beat evolutionary algorithms on the class of hurdle problems

  • Phan Trung Hai Nguyen
  • Dirk Sudholt

Memetic algorithms are popular hybrid search heuristics that integrate local search into the search process of an evolutionary algorithm in order to combine the advantages of rapid exploitation and global optimisation. However, these algorithms are not well understood and the field is lacking a solid theoretical foundation that explains when and why memetic algorithms are effective.

We provide a rigorous runtime analysis of a simple memetic algorithm, the (1+1) MA, on the Hurdle problem class, a landscape class of tuneable difficulty that shows a "big valley structure", a characteristic feature of many hard problems from combinatorial optimisation. The only parameter of this class is the hurdle width w, which describes the length of fitness valleys that have to be overcome. We show that the (1+1) EA requires Θ(n^w) expected function evaluations to find the optimum, whereas the (1+1) MA with best-improvement and first-improvement local search can find the optimum in Θ(n^2 + n^3/w^2) and Θ(n^3/w^2) function evaluations, respectively. Surprisingly, while increasing the hurdle width makes the problem harder for evolutionary algorithms, the problem becomes easier for memetic algorithms. We discuss how these findings can explain and illustrate the success of memetic algorithms for problems with big valley structures.
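
A sketch of the setting, using one common formulation of the Hurdle function (the fitness of a bit string depends only on its number of zeros, with local optima at every w-th value) and a (1+1) MA that applies standard bit mutation followed by first-improvement local search; the acceptance details and constants are illustrative assumptions.

    import math
    import random

    def hurdle(x, w):
        """One common formulation of the Hurdle function (maximisation):
        global optimum at the all-ones string, local optima whenever the
        number of zeros is a positive multiple of the hurdle width w."""
        z = x.count(0)
        return -math.ceil(z / w) - (w * math.ceil(z / w) - z) / w

    def one_plus_one_ma(n, w, max_evals=200_000):
        """(1+1) MA sketch: standard bit mutation, then first-improvement
        local search until a local optimum is reached, then elitist
        acceptance of the resulting offspring."""
        x = [random.randint(0, 1) for _ in range(n)]
        fx, evals = hurdle(x, w), 1
        while fx < 0 and evals < max_evals:
            y = [1 - b if random.random() < 1 / n else b for b in x]  # mutation
            fy, evals = hurdle(y, w), evals + 1
            improved = True
            while improved and evals < max_evals:
                improved = False
                for i in random.sample(range(n), n):
                    y[i] = 1 - y[i]                    # try a single bit flip
                    fz, evals = hurdle(y, w), evals + 1
                    if fz > fy:
                        fy, improved = fz, True
                        break                          # first improvement: keep it
                    y[i] = 1 - y[i]                    # otherwise undo the flip
            if fy >= fx:
                x, fx = y, fy
        return x, fx

    # Example: one_plus_one_ma(50, 4)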

Cooperative co-evolution with online optimizer selection for large-scale optimization

  • Yuan Sun
  • Michael Kirley
  • Xiaodong Li

Cooperative co-evolution (CC) is an effective framework for solving large-scale optimization problems. It typically divides a problem into components and uses one optimizer to solve the components in a round-robin fashion. However, the relative contribution of each component to the overall fitness value may vary, and using a single optimizer may not be sufficient when solving a wide range of components with different characteristics. In this paper, we propose a novel CC framework which can select an appropriate optimizer for each component based on its contribution to the fitness improvement. In each evolutionary cycle, the candidate optimizer and component that make the greatest contribution to the fitness improvement are selected for evolution. We evaluated the efficacy of the proposed CC with Optimizer Selection (CCOS) algorithm using large-scale benchmark problems. The numerical experiments showed that CCOS outperformed the CC model without optimizer selection. When compared against several other state-of-the-art algorithms, CCOS generated competitive solution quality.
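
A compact sketch of contribution-based selection in such a framework; the bookkeeping, the exploration rule and the interface of evolve() are assumptions made for illustration, not the paper's exact CCOS procedure.

    import random

    def ccos_sketch(components, optimizers, evolve, cycles=200, explore=0.1):
        """evolve(component, optimizer) is assumed to run one co-evolutionary
        cycle of `optimizer` on `component` (within the shared context
        vector) and to return the resulting improvement of the overall
        fitness.  The (component, optimizer) pair with the largest recent
        contribution is chosen, with occasional random exploration."""
        contribution = {(c, o): float("inf")          # optimistic: try everything once
                        for c in components for o in optimizers}
        for _ in range(cycles):
            if random.random() < explore:
                component, optimizer = random.choice(list(contribution))
            else:
                component, optimizer = max(contribution, key=contribution.get)
            delta = evolve(component, optimizer)      # one co-evolutionary cycle
            contribution[(component, optimizer)] = delta
        return contribution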

Quasi-bistability of walk-based landscape measures in stochastic fitness landscapes

  • Bernhard Werth
  • Erik Pitzer
  • Gerald Ostermayer
  • Michael Affenzeller

Exploratory landscape analysis is a useful method for algorithm selection, parametrization, and for building an understanding of how a heuristic optimization algorithm performs on a problem and why. A prominent family of fitness landscape analysis measures is based on random walks through the search space. However, most of these features were only introduced for deterministic fitness functions, and it is unclear under which conditions walk-based landscape features are applicable to noisy optimization problems. In this paper, we empirically analyze the effects of noise in the fitness function on these measures and identify two dominant regimes, in which the measures describe either the underlying problem or the noise. Additionally, we observe how the step sizes and walk lengths of random walks influence this behavior.
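
For concreteness, one walk-based measure of the kind in question, the autocorrelation of fitness values along an isotropic random walk, can be computed as below; running it repeatedly on a noisy fitness function with different step sizes and walk lengths exhibits the two regimes described above. Function and parameter names are illustrative.

    import numpy as np

    def walk_autocorrelation(noisy_f, x0, steps=200, step_size=0.1, lag=1,
                             rng=None):
        """Lag-`lag` autocorrelation of fitness values along a random walk
        with normally distributed steps, a standard walk-based landscape
        feature."""
        rng = np.random.default_rng() if rng is None else rng
        x = np.asarray(x0, dtype=float)
        values = []
        for _ in range(steps):
            x = x + step_size * rng.standard_normal(x.shape)  # one walk step
            values.append(noisy_f(x))
        values = np.asarray(values)
        centred = values - values.mean()
        return float(np.dot(centred[:-lag], centred[lag:]) / np.dot(centred, centred))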

Changing or keeping solutions in dynamic optimization problems with switching costs

  • Danial Yazdani
  • Jürgen Branke
  • Mohammad Nabi Omidvar
  • Trung Thanh Nguyen
  • Xin Yao

Dynamic optimization problems (DOPs) are problems that change over time. However, most investigations in this domain focus on tracking moving optima (TMO) without considering the cost of switching from one solution to another when the environment changes. Robust optimization over time (ROOT) tries to address this shortcoming by finding solutions which remain acceptable for several environments; however, ROOT methods change solutions only when they become unacceptable. Indeed, TMO and ROOT are two extreme cases in the sense that in the former the switching cost is considered zero and in the latter it is considered very large. In this paper, we propose a new semi-ROOT algorithm based on a new approach to the switching cost. This algorithm changes solutions when: 1) the current solution is not acceptable, or 2) the current solution is still acceptable but the algorithm has found a better solution and switching is preferable despite the cost. The main objective of the proposed algorithm is to maximize performance based on the fitness of solutions and their switching cost. The experiments are conducted on the modified moving peaks benchmark (mMPB), and the performance of the proposed algorithm is investigated alongside state-of-the-art ROOT and TMO methods.
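
The switching decision itself can be summarised as below; how the fitness gain is traded off against the switching cost is an illustrative assumption, not the exact rule of the proposed algorithm.

    def should_switch(current_fitness, candidate_fitness, acceptance_threshold,
                      switching_cost, expected_usage_time=1.0):
        """Semi-ROOT style switching rule (maximisation): switch if the
        current solution has become unacceptable, or if the candidate's
        fitness advantage accumulated over its expected usage time outweighs
        the cost of switching."""
        if current_fitness < acceptance_threshold:        # rule 1: forced switch
            return True
        gain = (candidate_fitness - current_fitness) * expected_usage_time
        return gain > switching_cost                      # rule 2: worthwhile switch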

Working principles of binary differential evolution

  • Weijie Zheng
  • Guangwen Yang
  • Benjamin Doerr

We conduct a first fundamental analysis of the working principles of binary differential evolution (BDE), an optimization heuristic for binary decision variables that was derived by Gong and Tuson (2007) from the very successful classic differential evolution (DE) for continuous optimization. We show that, unlike most other optimization paradigms, BDE is stable in the sense that neutral bit values are sampled with probability close to 1/2. This is generally a desirable property; however, it makes it harder to find the optimal values of decision variables with a small influence on the objective function. This can result in an optimization time exponential in the dimension when optimizing simple symmetric functions like OneMax. On the positive side, BDE quickly detects and optimizes the most important decision variables. For example, dominant bits converge to the optimal value in time logarithmic in the population size. This leads to a very good performance in situations where the decision variables differ strongly in their influence on the result, in particular when the target is not to find the optimal solution, but only a good one. Overall, our results indicate that BDE is an interesting optimization paradigm with characteristics significantly different from those of classic evolutionary algorithms or EDAs.
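
For orientation, a sketch of one generation of BDE in the Gong-and-Tuson style, as we read it; the exact operator details should be taken from the paper, and the population handling below is an assumption.

    import random

    def bde_generation(pop, fitness, F=0.8, CR=0.9):
        """One generation of binary differential evolution (maximisation,
        requires at least four individuals): wherever the two difference
        vectors disagree, the base vector's bit is flipped with probability
        F; binomial crossover with rate CR then mixes the mutant with the
        parent, and the better of parent and trial survives."""
        n = len(pop[0])
        next_pop = []
        for i, parent in enumerate(pop):
            r1, r2, r3 = random.sample([j for j in range(len(pop)) if j != i], 3)
            base, d1, d2 = pop[r1], pop[r2], pop[r3]
            mutant = [1 - base[j] if d1[j] != d2[j] and random.random() < F
                      else base[j] for j in range(n)]
            forced = random.randrange(n)          # at least one bit from the mutant
            trial = [mutant[j] if j == forced or random.random() < CR
                     else parent[j] for j in range(n)]
            next_pop.append(trial if fitness(trial) >= fitness(parent) else parent)
        return next_pop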