DocumentCode :
1441261
Title :
Evolutionary algorithms and gradient search: similarities and differences
Author :
Salomon, Ralf
Author_Institution :
Dept. of Comput. Sci., Zurich Univ., Switzerland
Volume :
2
Issue :
2
fYear :
1998
fDate :
7/1/1998
Firstpage :
45
Lastpage :
55
Abstract :
Classical gradient methods and evolutionary algorithms represent two very different classes of optimization techniques that seem to have very different properties. This paper discusses several "obvious" differences and explores to what extent a hybrid method, the evolutionary-gradient-search procedure, can be used beneficially in the field of continuous parameter optimization. Simulation experiments show that on some test functions the hybrid method yields faster convergence than pure evolution strategies, but that on other test functions the procedure exhibits the same deficiencies as steepest-descent methods.
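The hybrid idea behind the evolutionary-gradient-search procedure can be illustrated with a minimal sketch: estimate a descent direction from randomly mutated trial points (rather than analytic derivatives), then take a step along it with a simple self-adapted step size. The sampling scheme, weighting, and step-size rule below are simplified assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def egs_step(f, x, sigma, lam=10, rng=None):
    """One iteration of an evolutionary-gradient-search-style hybrid
    (a sketch; the weighting and step-size adaptation are assumptions)."""
    rng = rng or np.random.default_rng()
    # Sample lam random test directions around the current point.
    z = rng.standard_normal((lam, x.size))
    trials = x + sigma * z
    # Estimate an ascent direction from fitness differences, then negate it.
    weights = np.array([f(t) - f(x) for t in trials])
    g = (weights[:, None] * z).sum(axis=0)
    norm = np.linalg.norm(g)
    if norm == 0.0:
        return x, sigma
    g /= norm
    # Try a longer and a shorter step; keep the better one and adapt sigma
    # accordingly (a simple two-point self-adaptation rule assumed here).
    x_long = x - 1.5 * sigma * g
    x_short = x - (sigma / 1.5) * g
    if f(x_long) < f(x_short):
        return x_long, 1.5 * sigma
    return x_short, sigma / 1.5

# Usage on the sphere function, a standard continuous test function.
f = lambda v: float(np.dot(v, v))
rng = np.random.default_rng(0)
x, sigma = np.full(5, 3.0), 1.0
for _ in range(200):
    x, sigma = egs_step(f, x, sigma, rng=rng)
```

On a smooth unimodal function like the sphere, the estimated direction approaches the true negative gradient as the number of trials grows, which is why the hybrid can converge faster than a pure evolution strategy there, while inheriting steepest-descent behavior on ill-conditioned landscapes.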
Keywords :
conjugate gradient methods; convergence; genetic algorithms; search problems; continuous parameter optimization; evolutionary algorithms; evolutionary-gradient-search procedure; optimization; steepest-descent methods; Algorithm design and analysis; Evolutionary computation; Genetic mutations; Genetic programming; Gradient methods; Optimization methods; Optimized production technology; Testing
fLanguage :
English
Journal_Title :
IEEE Transactions on Evolutionary Computation
Publisher :
IEEE
ISSN :
1089-778X
Type :
jour
DOI :
10.1109/4235.728207
Filename :
728207