DocumentCode :
3725424
Title :
An approach to automatic evaluation of higher cognitive levels assessment items
Author :
Shilpi Banerjee;Chandrashekar Ramanathan;N.J. Rao
Author_Institution :
International Institute of Information Technology, Bangalore, India
Year :
2015
Firstpage :
342
Lastpage :
347
Abstract :
Large-scale assessments involve relatively large numbers of students, and one of the biggest challenges facing MOOCs today is conducting effective assessments in such an environment. The quality of large-scale assessment is threatened from multiple sources, including assessment-instrument-specific errors and measurement errors. Assessment-instrument-specific errors relate to the extent to which an assessment meets its objectives, while measurement errors are incurred during the process of evaluation. A survey of a sample of existing instruments used for large-scale assessments is conducted to identify assessment-instrument-specific errors. In this paper, we propose the use of technology to build electronic item banks that avoid both assessment-instrument-specific and measurement errors, thereby improving the quality of assessments. We propose 12 unique item types that are amenable to automatic evaluation, and the process of automatically evaluating student responses is discussed in detail for each item type. These automated item types provide cost-effective ways of achieving validity and reliability in large-scale assessments.
Keywords :
"Instruments","Connectors","Measurement errors","Reliability","Programming profession","Manuals"
Publisher :
ieee
Conference_Titel :
2015 IEEE 3rd International Conference on MOOCs, Innovation and Technology in Education (MITE)
Type :
conf
DOI :
10.1109/MITE.2015.7375342
Filename :
7375342