Title of article :
Marking essays on screen: An investigation into the
reliability of marking extended subjective texts
Author/Authors :
Martin Johnson, Rita Nádas and John F. Bell
Issue Information :
Journal issue, 2010
Abstract :
There is a growing body of research literature that considers how the mode of
assessment, either computer-based or paper-based, might affect candidates’
performances. Despite this, relatively little of that literature shifts the
focus of attention to those making assessment judgements or considers
issues of assessor consistency when dealing with extended textual answers
in different modes. This research project explored whether the mode in which
a set of extended essay texts was accessed and read systematically influenced
the assessment judgements made about them. During the project, 12 experienced
English literature assessors marked two matched samples of 90 essay
exam scripts on screen and on paper. A variety of statistical methods were used
to compare the reliability of the essay marks given by the assessors across
modes. It was found that mode did not exert a systematic influence on
marking reliability. The analyses also compared examiners’ marks with a gold
standard mark for each essay and found no shifts in the location of the standard
of recognised attainment across modes.