• DocumentCode
    2781027
  • Title
    A Markovian extension of Valiant's learning model
  • Author
    Aldous, David; Vazirani, Umesh
  • Author_Institution
    California Univ., Berkeley, CA, USA
  • fYear
    1990
  • fDate
    22-24 Oct 1990
  • Firstpage
    392
  • Abstract
    A model of learning that expands on the Valiant model is introduced. The point of departure from the Valiant model is that the learner is placed in a Markovian environment. The environment of the learner is an (exponentially large) graph, and the examples reside on the vertices of the graph, one example per vertex. The learner obtains the examples while performing a random walk on the graph. At each step, the learning algorithm guesses the classification of the example on the current vertex using its current hypothesis. If its guess is incorrect, the learning algorithm updates its current working hypothesis. The performance of the learning algorithm in a given environment is judged by the expected number of mistakes made as a function of the number of steps in the random walk. The predictive value of Occam algorithms under this weaker probabilistic model of the learner's environment is studied.
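    The protocol in the abstract can be sketched as a simple simulation. The graph, labels, and the majority-vote hypothesis below are all illustrative stand-ins (the paper does not specify a hypothesis class here); only the walk-predict-update-on-mistake loop and the mistake count mirror the abstract.

    ```python
    import random

    def random_walk_learning(adjacency, labels, steps, seed=0):
        """Walk randomly on a labeled graph; predict each vertex's label
        with the current hypothesis; update the hypothesis only when the
        prediction is wrong; return the total number of mistakes."""
        rng = random.Random(seed)
        v = rng.choice(list(adjacency))    # start at an arbitrary vertex
        counts = {0: 0, 1: 0}              # toy hypothesis: majority label seen
        mistakes = 0
        for _ in range(steps):
            guess = 1 if counts[1] > counts[0] else 0
            if guess != labels[v]:         # incorrect guess:
                mistakes += 1              #   count the mistake and
                counts[labels[v]] += 1     #   update the working hypothesis
            v = rng.choice(adjacency[v])   # one step of the random walk
        return mistakes

    # Toy environment: a 4-cycle whose vertices are labeled by parity.
    adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
    lab = {0: 0, 1: 1, 2: 0, 3: 1}
    print(random_walk_learning(adj, lab, steps=100))
    ```

    The quantity of interest in the paper is the expectation of this mistake count as a function of the number of steps, taken over the randomness of the walk.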
  • Keywords
    Markov processes; learning systems; Markovian extension; Occam algorithms; Valiant learning model; classification; performance; random walk; weaker probabilistic model; Classification algorithms; Predictive models; Probability distribution;
  • fLanguage
    English
  • Publisher
    IEEE
  • Conference_Titel
    Proceedings of the 31st Annual Symposium on Foundations of Computer Science, 1990
  • Conference_Location
    St. Louis, MO
  • Print_ISBN
    0-8186-2082-X
  • Type
    conf
  • DOI
    10.1109/FSCS.1990.89558
  • Filename
    89558