DocumentCode :
2443602
Title :
Optimization is easy and learning is hard in the typical function
Author :
English, Thomas M.
Author_Institution :
The Tom English Project, Lubbock, TX, USA
Volume :
2
fYear :
2000
fDate :
2000
Firstpage :
924
Abstract :
Elementary results in algorithmic information theory are invoked to show that almost all finite functions are highly random. That is, the shortest program generating a given function description is rarely much shorter than the description itself. It is also shown that the length of a program for learning or optimization bounds the algorithmic information it supplies about any description. For highly random descriptions, success in guessing values is essentially accidental, but learning accuracy can be high in some cases if the program is long. Optimizers, on the other hand, are graded according to the goodness of values in the partial functions they sample. In a highly random function, good values are as common and as evenly dispersed as bad values, and random sampling of points is very efficient.
Keywords :
algorithm theory; information theory; learning (artificial intelligence); optimisation; algorithmic information theory; finite functions; function description; highly random description; learning; learning accuracy; optimization; partial functions; random point sampling; Algorithm design and analysis; Information theory; Performance analysis; Sampling methods;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
Proceedings of the 2000 Congress on Evolutionary Computation
Conference_Location :
La Jolla, CA
Print_ISBN :
0-7803-6375-2
Type :
conf
DOI :
10.1109/CEC.2000.870741
Filename :
870741