Title of article :
Are multiple objective measures of student performance necessary?
Author/Authors :
David J. Minion, Michael B. Donnelly, Rhonda C. Quick, Andrew Pulito, Richard Schwartz
Issue Information :
Journal issue, serial year 2002
Pages :
3
From page :
663
To page :
665
Abstract :
Background: This study examines the effect of using multiple modalities to evaluate medical students. Methods: Thirty-four students were evaluated by a complex model utilizing the National Board of Medical Examiners (NBME) shelf examination, an Objective Structured Clinical Examination (OSCE), Computer Patient Simulation (CPS), and faculty and peer evaluations. Results were compared with a traditional model based on the NBME examination and faculty evaluation alone. Results: Reliabilities (coefficient α) of the complex and traditional models were 0.72 and 0.47, respectively. Item correlations suggested that the NBME examination was most discriminating (r = 0.75), followed by the OSCE (r = 0.52), peer evaluation (r = 0.43), CPS (r = 0.39), and faculty evaluation (r = 0.32). The rank order correlation (Spearman’s ρ) between scores calculated using the two models was 0.87. Conclusions: Although the complex model has improved reliability, both models rank students similarly. However, neither model fully captures and reflects the information provided by each of the specific evaluation methods.
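Note: As a rough illustration of the statistics reported in the abstract (coefficient α reliability and Spearman’s rank correlation), the sketch below computes both from a hypothetical students × measures score matrix. The data values, the equal weighting of measures, and the composite definitions are assumptions for demonstration only and are not taken from the paper.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Coefficient alpha for a (students x items) score matrix."""
    k = scores.shape[1]                          # number of evaluation measures
    item_var = scores.var(axis=0, ddof=1)        # variance of each measure
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of composite scores
    return (k / (k - 1)) * (1 - item_var.sum() / total_var)

def spearman_rho(x: np.ndarray, y: np.ndarray) -> float:
    """Spearman rank correlation: Pearson correlation of the ranks (no ties assumed)."""
    rank = lambda v: np.argsort(np.argsort(v))
    return float(np.corrcoef(rank(x), rank(y))[0, 1])

# Hypothetical scores for 5 students on five measures
# (NBME, OSCE, CPS, peer, faculty); values are illustrative only.
scores = np.array([
    [78, 70, 65, 80, 75],
    [85, 82, 70, 78, 80],
    [65, 60, 62, 70, 68],
    [90, 88, 80, 85, 86],
    [72, 74, 66, 72, 70],
], dtype=float)

complex_total = scores.mean(axis=1)                  # all five measures, equally weighted
traditional_total = scores[:, [0, 4]].mean(axis=1)   # NBME + faculty evaluation only

print("coefficient alpha (complex model):", round(cronbach_alpha(scores), 2))
print("Spearman rho between model rankings:", round(spearman_rho(complex_total, traditional_total), 2))
```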
Keywords :
evaluation , National Board of Medical Examiners , computer simulation , Objective Structured Clinical Examination , Undergraduate medical education , grading
Journal title :
The American Journal of Surgery
Serial Year :
2002
Record number :
621427