TOC GECCO 2020 THEORY

GECCO '20: Proceedings of the 2020 Genetic and Evolutionary Computation Conference

SESSION: Theory

The (1 + (λ,λ)) GA is even faster on multimodal problems

  • Denis Antipov
  • Benjamin Doerr
  • Vitalii Karavaev

For the (1 + (λ, λ)) genetic algorithm, rigorous runtime analyses on unimodal fitness functions have shown that it can be faster than classical evolutionary algorithms, though on these simple problems the gains are only moderate.

In this work, we conduct the first runtime analysis of this algorithm on a multimodal problem class, the jump functions benchmark. We show that with the right parameters, the (1 + (λ, λ)) GA optimizes any jump function with jump size 2 ≤ k ≤ n/16 in expected time O(n^((k+1)/2) e^(O(k)) k^(-k/2)), which significantly, and already for constant k, outperforms standard mutation-based algorithms with their Θ(n^k) runtime and standard crossover-based algorithms with their O(n^(k-1)) runtime.

Our work also suggests some general advice on how to set the parameters of the (1 + (λ,λ)) GA, which might ease the further use of this algorithm.
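
For readers unfamiliar with the algorithm, the following is a minimal Python sketch of one generation of the (1 + (λ, λ)) GA on OneMax, using the standard parameterization p = λ/n for the mutation probability and c = 1/λ for the crossover bias; the parameter settings analysed for jump functions in the paper differ from these defaults, so this is an illustration, not the analysed configuration.

```python
import random

def onemax(x):
    """Fitness: number of one-bits."""
    return sum(x)

def one_plus_lambda_lambda_step(x, lam, fitness=onemax):
    """One generation of the (1 + (lambda, lambda)) GA (sketch).

    Standard parameterization: mutation probability p = lam/n and
    crossover bias c = 1/lam.
    """
    n = len(x)
    p, c = lam / n, 1 / lam

    # Mutation phase: sample a common mutation radius ell ~ Bin(n, p),
    # create lam mutants flipping exactly ell bits each, keep the best.
    ell = sum(random.random() < p for _ in range(n))
    best_mutant = None
    for _ in range(lam):
        y = x[:]
        for i in random.sample(range(n), ell):
            y[i] ^= 1
        if best_mutant is None or fitness(y) > fitness(best_mutant):
            best_mutant = y

    # Crossover phase: lam biased uniform crossovers between parent and
    # best mutant (take each bit from the mutant with probability c).
    best_cross = None
    for _ in range(lam):
        z = [m if random.random() < c else b
             for b, m in zip(x, best_mutant)]
        if best_cross is None or fitness(z) > fitness(best_cross):
            best_cross = z

    # Elitist selection: keep the parent unless the crossover winner is at least as good.
    return best_cross if fitness(best_cross) >= fitness(x) else x
```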

Fast mutation in crossover-based algorithms

  • Denis Antipov
  • Maxim Buzdalov
  • Benjamin Doerr

The heavy-tailed mutation operator proposed by Doerr et al. (GECCO 2017), called fast mutation in line with previously used terminology, has so far been used successfully only in purely mutation-based algorithms. There, it can relieve the algorithm designer from finding the optimal mutation rate and nevertheless obtain a performance close to the one that the optimal mutation rate gives.

In this first runtime analysis of a crossover-based algorithm using a heavy-tailed choice of the mutation rate, we show an even stronger impact. With a heavy-tailed mutation rate, the runtime of the (1 + (λ, λ)) genetic algorithm on the OneMax benchmark function becomes linear in the problem size. This is asymptotically faster than with any static mutation rate and is the same asymptotic runtime that can be obtained with a self-adjusting choice of the mutation rate. This result is complemented by an empirical study which shows the effectiveness of the fast mutation also on random MAX-3SAT instances.
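
For illustration, here is a minimal Python sketch of the heavy-tailed (fast) mutation operator: a mutation strength r is drawn from a power-law distribution on {1, ..., n/2} with exponent β (β ≈ 1.5 is a common choice), and standard bit mutation is then applied with rate r/n. The function names are ours; see Doerr et al. (GECCO 2017) for the precise distribution used.

```python
import random

def sample_power_law(upper, beta=1.5):
    """Sample r in {1, ..., upper} with Pr[r] proportional to r^(-beta)."""
    support = range(1, upper + 1)
    weights = [r ** (-beta) for r in support]
    return random.choices(support, weights=weights)[0]

def fast_mutation(x, beta=1.5):
    """Standard bit mutation with a heavy-tailed mutation rate r/n."""
    n = len(x)
    r = sample_power_law(n // 2, beta)
    return [bit ^ 1 if random.random() < r / n else bit for bit in x]
```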

More effective randomized search heuristics for graph coloring through dynamic optimization

  • Jakob Bossek
  • Frank Neumann
  • Pan Peng
  • Dirk Sudholt

Dynamic optimization problems have gained significant attention in evolutionary computation as evolutionary algorithms (EAs) can easily adapt to changing environments. We show that EAs can solve the graph coloring problem for bipartite graphs more efficiently by using dynamic optimization. In our approach the graph instance is given incrementally such that the EA can reoptimize its coloring when a new edge introduces a conflict. We show that, when edges are inserted in a way that preserves graph connectivity, Randomized Local Search (RLS) efficiently finds a proper 2-coloring for all bipartite graphs. This includes graphs for which RLS and other EAs need exponential expected time in a static optimization scenario. We investigate different ways of building up the graph by popular graph traversals such as breadth-first search and depth-first search and analyse the resulting runtime behavior. We further show that offspring populations (e.g., a (1 + λ) RLS) lead to an exponential speedup in λ. Finally, an island model using 3 islands succeeds in an optimal time of Θ(m) on every m-edge bipartite graph, outperforming offspring populations. This is the first example where an island model guarantees a speedup that is not bounded in the number of islands.
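
A minimal Python sketch of the incremental setup described above: edges arrive one at a time, and after each insertion RLS (flip one uniformly random vertex colour per step, accept if the number of conflicts does not increase) repairs the 2-coloring. The helper names are ours, and the sketch ignores the traversal-based insertion orders and island models studied in the paper.

```python
import random

def conflicts(coloring, edges):
    """Number of monochromatic (conflicting) edges."""
    return sum(coloring[u] == coloring[v] for u, v in edges)

def rls_repair(coloring, edges):
    """RLS: flip one random vertex colour per step, keep the flip if not worse."""
    vertices = list(coloring)
    current = conflicts(coloring, edges)
    while current > 0:
        v = random.choice(vertices)
        coloring[v] ^= 1
        new = conflicts(coloring, edges)
        if new <= current:
            current = new          # accept
        else:
            coloring[v] ^= 1       # revert

def dynamic_two_coloring(num_vertices, edge_stream):
    """Insert edges one by one and reoptimize after each insertion.

    Assumes the final graph is bipartite, so a proper 2-coloring exists.
    """
    coloring = {v: 0 for v in range(num_vertices)}
    edges = []
    for edge in edge_stream:
        edges.append(edge)
        rls_repair(coloring, edges)
    return coloring
```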

The node weight dependent traveling salesperson problem: approximation algorithms and randomized search heuristics

  • Jakob Bossek
  • Katrin Casel
  • Pascal Kerschke
  • Frank Neumann

Several important optimization problems in the area of vehicle routing can be seen as variants of the classical Traveling Salesperson Problem (TSP). In the area of evolutionary computation, the Traveling Thief Problem (TTP) has gained increasing interest over the last five years. In this paper, we investigate the effect of weights on such problems, in the sense that the cost of traveling increases with respect to the weights of nodes already visited during a tour. This provides abstractions of important TSP variants such as the Traveling Thief Problem and time-dependent TSP variants, and allows us to study precisely the increase in difficulty caused by weight dependence. We provide a 3.59-approximation for this weight-dependent version of TSP with metric distances and bounded positive weights. Furthermore, we conduct experimental investigations for simple randomized local search with classical mutation operators and two variants of the state-of-the-art evolutionary algorithm EAX adapted to the weighted TSP. Our results show the impact of the node weights on the position of the nodes in the resulting tour.
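
To make the weight dependence concrete, the following Python sketch evaluates a tour under one plausible cost model in which each leg is scaled by the total weight of the nodes visited so far; the exact cost function used in the paper and its 3.59-approximation analysis may differ, so this is only an illustrative instantiation.

```python
def weighted_tour_cost(tour, dist, weight):
    """Cost of a closed tour where travel gets more expensive as node
    weights accumulate (illustrative model: leg (u, v) costs
    dist[u][v] * (1 + total weight of nodes already visited))."""
    total, carried = 0.0, 0.0
    for u, v in zip(tour, tour[1:] + tour[:1]):   # close the cycle
        carried += weight[u]
        total += dist[u][v] * (1.0 + carried)
    return total

# Hypothetical usage with a 3-node instance:
# dist = [[0, 2, 3], [2, 0, 1], [3, 1, 0]]
# weight = [1.0, 0.5, 2.0]
# print(weighted_tour_cost([0, 1, 2], dist, weight))
```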

Fixed-target runtime analysis

  • Maxim Buzdalov
  • Benjamin Doerr
  • Carola Doerr
  • Dmitry Vinokurov

Runtime analysis aims at contributing to our understanding of evolutionary algorithms through mathematical analyses of their runtimes. In the context of discrete optimization problems, runtime analysis classically studies the time needed to find an optimal solution. However, both from a practical and a theoretical viewpoint, more fine-grained performance measures are needed. Two complementary approaches have been suggested: fixed-budget analysis and fixed-target analysis.

In this work, we conduct an in-depth study on the advantages and limitations of fixed-target analyses. We show that, different from fixed-budget analyses, many classical methods from the runtime analysis of discrete evolutionary algorithms yield fixed-target results without greater effort. We use this to conduct a number of new fixed-target analyses. However, we also point out examples where an extension of the existing runtime result to a fixed-target result is highly non-trivial.
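
As a complement to the mathematical analyses, fixed-target data is easy to collect empirically. The following Python sketch runs a (1 + 1) EA on OneMax and records, for each target fitness value, the first iteration at which it is reached; the function and variable names are ours and this is an illustration of the performance measure, not of the paper's proof techniques.

```python
import random

def onemax(x):
    return sum(x)

def fixed_target_times(n, targets, max_iters=100_000):
    """Run a (1+1) EA on OneMax and record first hitting times of targets."""
    x = [random.randint(0, 1) for _ in range(n)]
    fx = onemax(x)
    hits = {t: 0 for t in targets if fx >= t}   # targets met at initialization
    for iteration in range(1, max_iters + 1):
        y = [b ^ 1 if random.random() < 1 / n else b for b in x]
        fy = onemax(y)
        if fy >= fx:                            # elitist acceptance
            x, fx = y, fy
        for t in targets:                       # record newly reached targets
            if t not in hits and fx >= t:
                hits[t] = iteration
        if fx == n:
            break
    return hits

# Hypothetical usage: fixed_target_times(100, targets=[60, 80, 95, 100])
```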

Does comma selection help to cope with local optima?

  • Benjamin Doerr

One hope of using non-elitism in evolutionary computation is that it aids leaving local optima. We perform a rigorous runtime analysis of a basic non-elitist evolutionary algorithm (EA), the (μ, λ) EA, on the most basic benchmark function with a local optimum, the jump function. We prove that for all reasonable values of the parameters and the problem, the expected runtime of the (μ, λ) EA is, apart from lower order terms, at least as large as the expected runtime of its elitist counterpart, the (μ + λ) EA (for which we conduct the first runtime analysis to allow this comparison). Consequently, the ability of the (μ, λ) EA to leave local optima to inferior solutions does not lead to a runtime advantage.

We complement this lower bound with an upper bound that, for broad ranges of the parameters, is identical to our lower bound apart from lower order terms. This is the first runtime result for a non-elitist algorithm on a multi-modal problem that is tight apart from lower order terms.
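
For readers who want to experiment, here is a minimal Python sketch of the (μ, λ) EA with comma selection next to its elitist (μ + λ) counterpart, together with the jump benchmark; the jump definition follows the standard textbook formulation, and the sketch is not meant to reproduce the exact algorithm variant analysed in the paper.

```python
import random

def jump(x, k):
    """Jump_k: OneMax shifted by k, with a fitness valley of width k below the optimum."""
    n, ones = len(x), sum(x)
    if ones <= n - k or ones == n:
        return k + ones
    return n - ones

def mutate(x, rate):
    """Standard bit mutation: flip each bit independently with probability rate."""
    return [b ^ 1 if random.random() < rate else b for b in x]

def mu_lambda_ea(fitness, n, mu, lam, generations, plus=False):
    """Non-elitist (mu, lambda) EA; set plus=True for the elitist (mu + lambda) EA."""
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(mu)]
    for _ in range(generations):
        offspring = [mutate(random.choice(pop), 1 / n) for _ in range(lam)]
        pool = pop + offspring if plus else offspring   # plus vs. comma selection
        pop = sorted(pool, key=fitness, reverse=True)[:mu]
    return max(pop, key=fitness)

# Example comparison on Jump with k = 3 (illustrative only):
# best = mu_lambda_ea(lambda x: jump(x, 3), n=50, mu=5, lam=20, generations=10_000)
```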

Self-adjusting evolutionary algorithms for multimodal optimization

  • Amirhossein Rajabi
  • Carsten Witt

Recent theoretical research has shown that self-adjusting and self-adaptive mechanisms can provably outperform static settings in evolutionary algorithms for binary search spaces. However, the vast majority of these studies focus on unimodal functions, which do not require the algorithm to flip several bits simultaneously to make progress. In fact, existing self-adjusting algorithms are not designed to detect local optima and offer no obvious benefit for crossing large Hamming gaps.

We suggest a mechanism called stagnation detection that can be added as a module to existing evolutionary algorithms (both with and without prior self-adjusting schemes). Added to a simple (1 + 1) EA, it yields a proven expected runtime on the well-known Jump benchmark that corresponds to an asymptotically optimal parameter setting and outperforms other mechanisms for multimodal optimization such as heavy-tailed mutation. We also investigate the module in the context of a self-adjusting (1 + λ) EA and show that it combines the previous benefits of this algorithm on unimodal problems with more efficient multimodal optimization. To explore the limitations of the approach, we additionally present an example where both self-adjusting mechanisms, including stagnation detection, do not help to find a beneficial setting of the mutation rate. Finally, we investigate our module for stagnation detection experimentally.
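
A simplified Python sketch of how such a stagnation-detection module can wrap a (1 + 1) EA: count unsuccessful steps at the current mutation strength r and move on to r + 1 once an improvement at strength r has become sufficiently unlikely. The counter threshold below follows the spirit of the mechanism; the precise thresholds and the handling of equally good offspring are specified in the paper.

```python
import math
import random

def sd_one_plus_one_ea(fitness, x, max_iters=1_000_000, R=None):
    """(1+1) EA with a simplified stagnation-detection module (sketch)."""
    n = len(x)
    R = R or n                      # parameter controlling the failure probability
    fx, r, fails = fitness(x), 1, 0
    for _ in range(max_iters):
        y = [b ^ 1 if random.random() < r / n else b for b in x]
        fy = fitness(y)
        if fy > fx:                 # strict improvement: accept, reset strength
            x, fx, r, fails = y, fy, 1, 0
            continue
        fails += 1
        # Escalate the mutation strength once improvements at strength r
        # have become unlikely (threshold chosen in the spirit of the paper).
        if fails > 2 * (math.e * n / r) ** r * math.log(n * R) and r < n // 2:
            r, fails = r + 1, 0
    return x
```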

A tight lower bound on the expected runtime of standard steady state genetic algorithms

  • Pietro S. Oliveto
  • Dirk Sudholt
  • Carsten Witt

Recent progress in the runtime analysis of evolutionary algorithms (EAs) has allowed the derivation of upper bounds on the expected runtime of standard steady-state GAs. These upper bounds have shown speed-ups of the GAs using crossover and mutation over the same algorithms that only use mutation operators (i.e., steady-state EAs), both for the standard unimodal benchmark function OneMax and for the multimodal benchmark function Jump. These upper bounds suggest that both populations and mutation rates higher than the default 1/n are beneficial to the GA. However, making rigorous claims was not possible because matching lower bounds were not available. Proving lower bounds on crossover-based EAs is a notoriously difficult task, as it is hard to capture the progress that a diverse population can make. We use a potential function approach to prove a tight lower bound on the expected runtime of the (2 + 1) GA for OneMax for all mutation rates c/n with c < 1.422. This provides the last piece of the puzzle that completes the proof that larger population sizes improve the performance of the standard steady-state GA for OneMax for various mutation rates, and it proves that the optimal mutation rate for the (2 + 1) GA on OneMax is [EQUATION].
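
For concreteness, here is a minimal Python sketch of a steady-state (2 + 1) GA with uniform crossover and mutation rate c/n on OneMax; parent selection and tie-breaking details vary between formal definitions, so this should be read as an illustrative variant rather than the exact algorithm analysed in the paper.

```python
import random

def onemax(x):
    return sum(x)

def two_plus_one_ga(n, c=1.0, fitness=onemax, max_iters=1_000_000):
    """Steady-state (2 + 1) GA: uniform crossover, then mutation rate c/n,
    then replacement of the worse population member if not better than the child."""
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(2)]
    for _ in range(max_iters):
        p1, p2 = random.choice(pop), random.choice(pop)
        child = [a if random.random() < 0.5 else b for a, b in zip(p1, p2)]
        child = [bit ^ 1 if random.random() < c / n else bit for bit in child]
        worst = min(range(2), key=lambda i: fitness(pop[i]))
        if fitness(child) >= fitness(pop[worst]):
            pop[worst] = child
        if max(fitness(ind) for ind in pop) == n:   # optimum found
            break
    return max(pop, key=fitness)
```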