Fitness approximation[1] aims to approximate the objective or fitness functions in evolutionary optimization by building machine learning models from data collected through numerical simulations or physical experiments. The machine learning models used for fitness approximation are also known as meta-models or surrogates, and evolutionary optimization based on approximated fitness evaluations is also known as surrogate-assisted evolutionary computation.[2] Fitness approximation in evolutionary optimization can be seen as a sub-area of data-driven evolutionary optimization.[3]
Approximate models in function optimization
Motivation
In many real-world optimization problems, including engineering problems, the number of fitness function evaluations needed to obtain a good solution dominates the optimization cost. To obtain efficient optimization algorithms, it is crucial to use prior information gained during the optimization process. Conceptually, a natural approach to exploiting this information is to build a model of the fitness function that assists in selecting candidate solutions for evaluation. A variety of techniques for constructing such models, often referred to as surrogates, metamodels or approximation models, have been studied for computationally expensive optimization problems.
Approaches
Common approaches to constructing approximate models, based on learning and interpolation from known fitness values of a small population, include the following (a brief illustrative sketch appears after the list):
- Low-degree polynomials and regression models
- Fourier surrogate modeling[4]
- Artificial neural networks, including multilayer perceptrons and radial basis function networks
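As an illustration of the first approach, the following minimal Python sketch fits a low-degree polynomial surrogate to a small set of evaluated individuals and uses it to pre-screen a larger pool of candidate solutions. It assumes a one-dimensional decision variable and a hypothetical expensive objective named true_fitness; all names, degrees and sample sizes are illustrative rather than part of any specific published method.

```python
import numpy as np

# Hypothetical expensive objective (stand-in for a numerical simulation
# or physical experiment); the goal is minimization.
def true_fitness(x):
    return (x - 1.3) ** 2 + 0.1 * np.sin(8.0 * x)

rng = np.random.default_rng(0)

# Small archive of individuals that have already been evaluated exactly.
x_known = rng.uniform(-2.0, 2.0, size=12)
f_known = true_fitness(x_known)

# Fit a low-degree polynomial surrogate to the known fitness values.
surrogate = np.poly1d(np.polyfit(x_known, f_known, deg=3))

# Use the cheap surrogate to pre-screen many candidates and keep only
# the most promising few for expensive exact evaluation.
candidates = rng.uniform(-2.0, 2.0, size=200)
best_candidates = candidates[np.argsort(surrogate(candidates))[:5]]

print("surrogate-selected candidates:", best_candidates)
print("their true fitness values:   ", true_fitness(best_candidates))
```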
Due to the limited number of training samples and high dimensionality encountered in engineering design optimization, constructing a globally valid approximate model remains difficult. As a result, evolutionary algorithms using such approximate fitness functions may converge to local optima. Therefore, it can be beneficial to selectively use the original fitness function together with the approximate model.
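One common way to realize this selective use, often called evolution control, is to let the surrogate screen the offspring in each generation and re-evaluate only a small fraction of them with the original fitness function, adding the results to the training archive so the surrogate improves as the search proceeds. The sketch below continues the one-dimensional NumPy setting from the previous example and is only one possible arrangement, with all names and parameter values illustrative.

```python
import numpy as np

# Hypothetical expensive objective (stand-in for a simulation); minimization.
def true_fitness(x):
    return (x - 1.3) ** 2 + 0.1 * np.sin(8.0 * x)

rng = np.random.default_rng(1)

# Archive of individuals evaluated with the original fitness function.
archive_x = rng.uniform(-2.0, 2.0, size=10)
archive_f = true_fitness(archive_x)

n_parents, n_offspring, n_reevaluated = 5, 30, 3

for generation in range(20):
    # Refit the low-degree polynomial surrogate on all exactly evaluated data.
    surrogate = np.poly1d(np.polyfit(archive_x, archive_f, deg=3))

    # Select parents by true fitness and create offspring by Gaussian mutation.
    parents = archive_x[np.argsort(archive_f)[:n_parents]]
    offspring = rng.choice(parents, size=n_offspring) + rng.normal(0.0, 0.2, n_offspring)

    # Evolution control: the surrogate ranks the offspring, and only the most
    # promising few receive an expensive evaluation with the original fitness.
    promising = offspring[np.argsort(surrogate(offspring))[:n_reevaluated]]
    archive_x = np.concatenate([archive_x, promising])
    archive_f = np.concatenate([archive_f, true_fitness(promising)])

best = archive_x[np.argmin(archive_f)]
print(f"best solution found: x = {best:.3f}, f(x) = {true_fitness(best):.4f}")
```

Re-evaluating only a few offspring per generation keeps the budget of expensive evaluations small, while the growing archive of exact evaluations limits how far an inaccurate surrogate can mislead the search toward false optima.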
See also
- A complete list of references on Fitness Approximation in Evolutionary Computation, by Yaochu Jin.
- Adaptive Fuzzy Fitness Granulation (AFFG), an approach designed to accelerate the convergence rate of evolutionary algorithms.
- Inverse reinforcement learning
- Reinforcement learning from human feedback
References
1. Y. Jin. A comprehensive survey of fitness approximation in evolutionary computation. Soft Computing, 9:3–12, 2005.
2. Y. Jin. Surrogate-assisted evolutionary computation: Recent advances and future challenges. Swarm and Evolutionary Computation, 1(2):61–70, 2011.
3. Y. Jin, H. Wang, T. Chugh, D. Guo and K. Miettinen. Data-driven evolutionary optimization: An overview and case studies. IEEE Transactions on Evolutionary Computation, 23(3):442–459, 2019.
4. L. Manzoni, D. M. Papetti, P. Cazzaniga, S. Spolaor, G. Mauri, D. Besozzi and M. S. Nobile. Surfing on fitness landscapes: A boost on optimization by Fourier surrogate modeling. Entropy, 22:285, 2020.