  • DocumentCode
    868882
  • Title
    Making inferences with small numbers of training sets
  • Author
    Kirsopp, C.; Shepperd, M.
  • Author_Institution
    Empirical Software Eng. Res. Group, Bournemouth Univ., UK
  • Volume
    149
  • Issue
    5
  • fYear
    2002
  • fDate
    10/1/2002
  • Firstpage
    123
  • Lastpage
    130
  • Abstract
    A potential methodological problem with empirical studies that assess project effort prediction systems is discussed. Frequently, a hold-out strategy is deployed, so that the data set is split into a training set and a validation set. Inferences are then made concerning the relative accuracy of the different prediction techniques under examination. This is typically done using very small numbers of sampled training sets. It is shown that such studies can lead to almost random results, particularly where relatively small effects are being studied. To illustrate this problem, two data sets are analysed using a configuration problem for case-based prediction, with results generated from 100 training sets. This enables results to be produced with quantified confidence limits. From this it is concluded that, in both cases, using fewer than five training sets leads to untrustworthy results, and that ideally more than 20 sets should be deployed. Unfortunately, this calls into question a number of empirical validations of prediction techniques, and so it is suggested that further research is needed as a matter of urgency.
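    A minimal sketch of the repeated hold-out procedure the abstract describes, assuming a synthetic effort data set and a naive proportional predictor as placeholders (the paper's actual data sets and case-based configuration are not reproduced here). It illustrates how the confidence interval around a mean accuracy estimate narrows as the number of sampled training sets grows.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a project effort data set (an assumption for
# illustration, not the authors' data): effort grows with project size.
n_projects = 60
size = rng.uniform(10, 500, n_projects)
effort = 3.0 * size + rng.normal(0, 150, n_projects)

def holdout_error(n_train, rng):
    """One hold-out split: fit a naive proportional effort model on a
    sampled training set, return mean absolute residual on validation."""
    idx = rng.permutation(n_projects)
    train, valid = idx[:n_train], idx[n_train:]
    slope = effort[train].mean() / size[train].mean()
    return np.abs(effort[valid] - slope * size[valid]).mean()

# Repeat the hold-out procedure with different numbers of sampled
# training sets and report an approximate 95% confidence interval on
# the mean accuracy estimate.
for n_sets in (3, 5, 20, 100):
    errors = [holdout_error(40, rng) for _ in range(n_sets)]
    mean = np.mean(errors)
    half_width = 1.96 * np.std(errors, ddof=1) / np.sqrt(n_sets)
    print(f"{n_sets:3d} training sets: error {mean:6.1f} +/- {half_width:.1f}")
```

    With only three or five sampled training sets the interval is wide enough that rankings of competing techniques could plausibly flip between runs, which is the instability the paper quantifies; with 20 or more sets the estimate stabilises.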
  • Keywords
    software development management; case-based prediction; configuration problem; empirical validations; hold-out strategy; inferences; methodological problem; prediction techniques; project effort prediction system; sampled training sets; validation set
  • fLanguage
    English
  • Journal_Title
    IEE Proceedings - Software
  • Publisher
    IET
  • ISSN
    1462-5970
  • Type
    jour
  • DOI
    10.1049/ip-sen:20020695
  • Filename
    1049201