• DocumentCode
    3333618
  • Title
    A Machine Learning Feature Reduction Technique for Feature Based Knowledge Systems
  • Author
    Yeung, Daniel Dr

  • Author_Institution
    Professor, The Hong Kong Polytechnic University, Kowloon, Hong Kong. csdaniel@inet.polyu.edu.hk
  • fYear
    2007
  • fDate
    13-15 Aug. 2007
  • Abstract
    A generalization error model provides theoretical support for a pattern classifier's performance in terms of prediction accuracy. However, existing models give very loose error bounds, which explains why classification systems generally rely on experimental validation for their claims on prediction accuracy. In this talk we will revisit this problem and explore the idea of developing a new generalization error model based on the assumption that only prediction accuracy on unseen points in a neighbourhood of a training point will be considered, since it would be unreasonable to require a pattern classifier to accurately predict unseen points "far away" from the training samples. The new error model makes use of the concept of a sensitivity measure for a multilayer feedforward neural network (Multilayer Perceptron or Radial Basis Function Neural Network). It can be demonstrated that any knowledge-based system represented by a set of features may be simplified by reducing its feature set using such a model. A number of experimental results using datasets such as those from the UCI repository and the KDD Cup 1999 will be presented. (An illustrative sketch of sensitivity-based feature reduction follows this record.)
  • Keywords
    Accuracy; Chapters; Cybernetics; Knowledge based systems; Machine learning; Mathematics; Predictive models; Sliding mode control; Societies; Speech;
  • fLanguage
    English
  • Publisher
    ieee
  • Conference_Titel
    Information Reuse and Integration, 2007. IRI 2007. IEEE International Conference on
  • Conference_Location
    Las Vegas, NV, USA
  • Print_ISBN
    1-4244-1500-4
  • Electronic_ISBN
    1-4244-1500-4
  • Type
    conf
  • DOI
    10.1109/IRI.2007.4296580
  • Filename
    4296580
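
The following is a minimal, hypothetical sketch of the general idea described in the abstract: rank the features of a trained multilayer perceptron by a perturbation-based sensitivity measure computed in a small neighbourhood of the training points, then drop low-sensitivity features. This is not the specific generalization error model or sensitivity measure presented in the talk; the synthetic dataset, network configuration, perturbation scale, and retention threshold are all assumptions chosen only to keep the example self-contained (Python with NumPy and scikit-learn).

    # Illustrative perturbation-based sensitivity measure for feature reduction.
    # NOT the exact model from the talk; dataset and parameters are assumptions.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)

    # Synthetic data standing in for the feature set of a feature-based knowledge system.
    X, y = make_classification(n_samples=500, n_features=10, n_informative=4,
                               n_redundant=2, random_state=0)

    # A multilayer perceptron as the pattern classifier.
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    clf.fit(X, y)

    def feature_sensitivity(model, X, sigma=0.1, n_draws=20):
        """Average change in predicted class probabilities when each feature is
        perturbed in a small neighbourhood of the training points, with all
        other features held fixed."""
        base = model.predict_proba(X)
        sens = np.zeros(X.shape[1])
        for j in range(X.shape[1]):
            deltas = []
            for _ in range(n_draws):
                Xp = X.copy()
                Xp[:, j] += rng.normal(0.0, sigma, size=X.shape[0])
                deltas.append(np.abs(model.predict_proba(Xp) - base).mean())
            sens[j] = np.mean(deltas)
        return sens

    sens = feature_sensitivity(clf, X)
    ranking = np.argsort(sens)[::-1]
    print("Features ranked by sensitivity:", ranking)

    # Features with low sensitivity are candidates for removal, simplifying
    # the feature set; the 0.5 * mean threshold here is an arbitrary assumption.
    keep = ranking[sens[ranking] > 0.5 * sens.mean()]
    print("Retained features:", np.sort(keep))

In this sketch the neighbourhood is modelled as Gaussian noise of scale sigma around each training point; a smaller sigma restricts the sensitivity estimate to points closer to the training data, in the spirit of the localized generalization idea described in the abstract.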