DocumentCode :
2222369
Title :
Analysis of Q-learning with random exploration for selection of auxiliary objectives in random local search
Author :
Buzdalov, Maxim ; Buzdalova, Arina
Author_Institution :
ITMO University, 49 Kronverkskiy av., Saint-Petersburg, Russia, 197101
fYear :
2015
fDate :
25-28 May 2015
Firstpage :
1776
Lastpage :
1783
Abstract :
We perform a theoretical analysis of a previously proposed method for enhancing the performance of an evolutionary algorithm with reinforcement learning. The method adaptively chooses between auxiliary objectives in a single-objective evolutionary algorithm using reinforcement learning. We consider the Q-learning algorithm with the ε-greedy strategy (ε > 0) on a benchmark problem based on OneMax. For the evolutionary algorithm, we consider Random Local Search. In our setting, the OneMax problem must be solved in the presence of the obstructive ZeroMax objective. This benchmark tests the ability of the reinforcement learning algorithm to ignore such an inefficient objective. It was previously shown that with the greedy strategy (ε = 0), the considered algorithm solves the described benchmark problem in the best possible time for a conventional evolutionary algorithm. However, the ε-greedy strategy turns out to require exponential time. Furthermore, every selection algorithm which selects an inefficient auxiliary objective with probability at least δ is shown to be asymptotically inefficient when δ > 0 is a constant.
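The setup analyzed in the abstract can be illustrated with a small simulation. The sketch below is a hypothetical reconstruction, not the authors' code: Random Local Search flips one random bit per step, and an ε-greedy Q-learning agent picks which objective (OneMax or the obstructive ZeroMax) decides acceptance. The state is the current OneMax value, the reward is its change per step, and all parameter values (ε, α, γ, n) are illustrative assumptions.

```python
import random

def rls_with_q_learning(n=15, eps=0.1, alpha=0.5, gamma=0.5,
                        max_iters=200000, seed=1):
    """Hypothetical sketch: RLS on OneMax, with an epsilon-greedy Q-learning
    agent selecting between the target OneMax objective and the obstructive
    ZeroMax objective at every iteration. Returns True if the OneMax optimum
    (the all-ones string) is reached within max_iters steps."""
    rng = random.Random(seed)
    onemax = lambda x: sum(x)                # target objective
    zeromax = lambda x: len(x) - sum(x)      # obstructive objective
    objectives = [onemax, zeromax]
    # Q[state][action]: state = current OneMax value, action = objective index
    Q = [[0.0, 0.0] for _ in range(n + 1)]
    x = [rng.randint(0, 1) for _ in range(n)]
    for _ in range(max_iters):
        s = onemax(x)
        if s == n:
            return True  # optimum found
        # epsilon-greedy action selection over the two objectives
        if rng.random() < eps:
            a = rng.randrange(2)
        else:
            a = 0 if Q[s][0] >= Q[s][1] else 1
        f = objectives[a]
        # RLS mutation: flip one uniformly random bit,
        # accept if the *chosen* objective does not get worse
        y = x[:]
        i = rng.randrange(n)
        y[i] ^= 1
        if f(y) >= f(x):
            x = y
        s2 = onemax(x)
        r = s2 - s  # reward: change in the target (OneMax) fitness
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
    return False

found = rls_with_q_learning()
print("optimum reached:", found)
```

Note how the run reflects the abstract's point: whenever exploration (probability ε/2 per step) hands control to ZeroMax, accepted moves can undo progress, so the drift toward the optimum weakens as the OneMax value grows, and for larger n the expected time blows up.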
Keywords :
Algorithm design and analysis; Benchmark testing; Evolutionary computation; Learning (artificial intelligence); Optimization; Sociology; Statistics;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
Evolutionary Computation (CEC), 2015 IEEE Congress on
Conference_Location :
Sendai, Japan
Type :
conf
DOI :
10.1109/CEC.2015.7257102
Filename :
7257102