DocumentCode :
2778060
Title :
Separating distribution-free and mistake-bound learning models over the Boolean domain
Author :
Blum, Avrim
Author_Institution :
MIT Lab. for Comput. Sci., Cambridge, MA, USA
fYear :
1990
fDate :
22-24 Oct 1990
Firstpage :
211
Abstract :
Two of the most commonly used models in computational learning theory are the distribution-free model, in which examples are chosen from a fixed but arbitrary distribution, and the absolute mistake-bound model, in which examples are presented in order by an adversary. Over the Boolean domain {0,1}^n, it is known that if the learner is allowed unlimited computational resources, then any concept class learnable in one model is also learnable in the other. In addition, any polynomial-time learning algorithm for a concept class in the mistake-bound model can be transformed into one that learns the class in the distribution-free model. It is shown that if one-way functions exist, then the converse does not hold. The author presents a concept class over {0,1}^n that is learnable in the distribution-free model but is not learnable in the absolute mistake-bound model if one-way functions exist. In addition, the concept class remains hard to learn in the mistake-bound model even if the learner is allowed a polynomial number of membership queries.
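The abstract refers to the standard transformation of a polynomial-time mistake-bound learner into a distribution-free (PAC) learner: draw random labelled examples, and output the current hypothesis once it survives sufficiently many consecutive examples without a mistake. A minimal sketch in Python, assuming a learner object with hypothetical predict/update methods, an example_oracle() returning labelled pairs from the target distribution, and a known mistake bound (all illustrative interfaces, not the paper's construction):

import math

def mb_to_pac(learner, example_oracle, mistake_bound, epsilon, delta):
    # Each of the at most (mistake_bound + 1) hypotheses produced by the
    # mistake-bound learner is tested on m consecutive random examples;
    # a union bound keeps the total failure probability below delta.
    m = math.ceil((1.0 / epsilon) * math.log((mistake_bound + 1) / delta))
    streak = 0
    while True:
        x, y = example_oracle()
        if learner.predict(x) == y:
            streak += 1
            if streak >= m:
                # Current hypothesis survived m examples: with probability
                # at least 1 - delta its error is at most epsilon.
                return learner.predict
        else:
            # A mistake: the learner revises its hypothesis (this can
            # happen at most mistake_bound times), and the survival
            # count restarts for the new hypothesis.
            learner.update(x, y)
            streak = 0

The paper's separation result says the reverse direction has no such generic transformation: under the existence of one-way functions, some classes are distribution-free learnable yet not mistake-bound learnable.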
Keywords :
Boolean functions; computational complexity; learning systems; set theory; Boolean domain; absolute mistake-bound model; adversary; computational learning theory; concept class; distribution-free model; membership queries; model separation; one-way functions; ordered examples; polynomial-time learning algorithm; unlimited computational resources; Cryptography; Distributed computing; Heart; Laboratories; Polynomials; Voting;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
Proceedings of the 31st Annual Symposium on Foundations of Computer Science, 1990
Conference_Location :
St. Louis, MO
Print_ISBN :
0-8186-2082-X
Type :
conf
DOI :
10.1109/FSCS.1990.89540
Filename :
89540