Title :
Evolutionary algorithms and gradient search: similarities and differences
Author_Institution :
Dept. of Comput. Sci., Zurich Univ., Switzerland
Date :
7/1/1998 12:00:00 AM
Abstract :
Classical gradient methods and evolutionary algorithms represent two very different classes of optimization techniques that seem to have very different properties. This paper discusses several "obvious" differences between them and explores to what extent a hybrid method, the evolutionary-gradient-search procedure, can be used beneficially in the field of continuous parameter optimization. Simulation experiments show that on some test functions the hybrid method yields faster convergence than pure evolution strategies, but that on other test functions it exhibits the same deficiencies as steepest-descent methods.
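The evolutionary-gradient-search idea named in the abstract can be illustrated by a minimal sketch (not the paper's exact procedure): the gradient is estimated stochastically from random Gaussian test points around the current parent, and the search then steps downhill along that estimate. All function and parameter names here are illustrative assumptions.

```python
import random
import math

def egs_step(f, x, sigma, num_offspring=10):
    """One step of an evolutionary-gradient-search-style update (sketch).

    Estimates the local gradient of f from random Gaussian test points
    around x, then moves against the estimated gradient (minimization).
    """
    n = len(x)
    fx = f(x)
    estimate = [0.0] * n
    for _ in range(num_offspring):
        # Gaussian perturbation, as in an evolution-strategy mutation.
        z = [random.gauss(0.0, sigma) for _ in range(n)]
        trial = [xi + zi for xi, zi in zip(x, z)]
        # Weight each perturbation by its fitness change: perturbations
        # that increase f point uphill and contribute positively to the
        # gradient estimate.
        w = f(trial) - fx
        for i in range(n):
            estimate[i] += w * z[i]
    norm = math.sqrt(sum(e * e for e in estimate)) or 1.0
    # Fixed-length step against the estimated gradient; the published
    # procedure additionally adapts the step size, which is omitted here.
    return [xi - sigma * ei / norm for xi, ei in zip(x, estimate)]

# Minimize the sphere function from a fixed start.
random.seed(0)
sphere = lambda v: sum(vi * vi for vi in v)
x = [3.0, -2.0]
for _ in range(200):
    x = egs_step(sphere, x, sigma=0.1)
print(sphere(x))
```

On a unimodal quadratic like the sphere function the estimated direction aligns well with the true gradient, which is consistent with the abstract's observation that the hybrid can converge faster than a pure evolution strategy on some test functions while inheriting steepest-descent weaknesses on others.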
Keywords :
conjugate gradient methods; convergence; genetic algorithms; search problems; continuous parameter optimization; evolutionary algorithms; evolutionary-gradient-search procedure; optimization; steepest-descent methods; Algorithm design and analysis; Convergence; Evolutionary computation; Genetic algorithms; Genetic mutations; Genetic programming; Gradient methods; Optimization methods; Optimized production technology; Testing;
Journal_Title :
IEEE Transactions on Evolutionary Computation
DOI :
10.1109/4235.728207