One of the biggest difficulties a university faces when introducing summative computerised assessment is the problem of time and space.
Computerised summative assessment turns the workload balance of exams on its head. Traditionally, marking completed exam papers was where lecturers' time costs were heaviest: most of the work came at the end, rather than during the development of the assessment itself. Computerised exams break this paradigm. Here the time costs are front-loaded: writing, building, testing and re-testing make up the most work-intensive phase of an assessment's lifespan. The actual marking is done by the computer, and results can be available almost instantaneously. No more weeks buried under piles of exam papers.
As a consequence, the payoff from computerised assessment depends on the number of students taking an exam: the work done upfront constructing an assessment is rewarded in proportion to the volume of marking it eliminates. We have moved into the sphere of economies of scale. The restrictions of time and space are the major barriers to realising this potential. Many universities do not have computer suites large enough to accommodate the numbers of students needed for big productivity gains, while others struggle to find times when facilities are not being used for other purposes, such as teaching or revision sessions. Under such circumstances the argument for computerised assessment weakens, as it becomes difficult to justify the amount of work required during development.
At Harper Adams University we have overcome these challenges using optical mark recognition (OMR) technology. This method combines traditional paper-based answer sheets with computerised marking, rather than having students take exams directly on a computer. Students answer multiple-choice questions by filling in bubbles on an exam paper, which can be done in a standard exam hall without the need for any computer facilities. Once the exam has concluded, the papers are collated and fed through a multi-sheet Fujitsu image scanner. The scans are then uploaded into the university's Questionmark Perception installation, which marks the students' answers automatically and generates a report. Using this method we have run a computerised assessment with over 240 students in a single exam. Inevitably, students make all manner of weird and wonderful errors when filling in the papers, but any issues are identified and addressed during a follow-up with the primary lecturer, which rarely takes longer than an hour. Marking time costs are cut dramatically.
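In outline, the marking step reduces to comparing each scanned sheet against an answer key and flagging anything ambiguous for the follow-up session. The sketch below is purely illustrative (our actual marking happens inside Questionmark Perception); the names `ANSWER_KEY` and `mark_sheet` are hypothetical:

```python
# Illustrative sketch of OMR-style marking, assuming the scanner has already
# turned each bubble sheet into a list of chosen options per question.

ANSWER_KEY = ["B", "D", "A", "C", "B"]   # one correct option per question

def mark_sheet(responses):
    """Return (score, flagged) for one student's scanned sheet.

    responses: one entry per question -- a single letter, "" for a blank,
    or e.g. "AB" when two bubbles were filled in. Blanks and multiple
    marks are flagged for review rather than silently marked wrong.
    """
    score, flagged = 0, []
    for q, (given, correct) in enumerate(zip(responses, ANSWER_KEY), start=1):
        if len(given) != 1:              # blank, or more than one bubble filled
            flagged.append(q)
        elif given == correct:
            score += 1
    return score, flagged
```

For example, a sheet scanned as `["B", "D", "AB", "", "B"]` would score 3 with questions 3 and 4 flagged for the lecturer follow-up, which is exactly the kind of "weird and wonderful" filling-in error described above.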
Multiple-choice exams do receive criticism from some authors, who suggest that these kinds of assessment fail to test the deeper understanding required by higher education. However, at Harper Adams we have conducted extensive analysis of our students' results using measures such as Cronbach's Alpha, and based on the outcomes we are confident that it is possible to write questions that test this level of knowledge. Obviously there are some more subjective subjects where this type of assessment will be inappropriate, and it would be a mistake to force such round pegs into square holes. But with careful planning and a thorough understanding of the principles of assessment, such problems can be avoided.
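Cronbach's Alpha itself is straightforward to compute from a students-by-questions score matrix: for k questions it is k/(k-1) multiplied by one minus the ratio of the summed per-question score variances to the variance of students' total scores. A minimal sketch using only the standard library (the function name is our own, not the tool used for the analysis):

```python
from statistics import pvariance

def cronbach_alpha(scores):
    """Cronbach's alpha for a students x questions matrix of item scores
    (e.g. 1 = correct, 0 = incorrect)."""
    k = len(scores[0])                       # number of questions
    items = list(zip(*scores))               # transpose: one tuple per question
    item_vars = sum(pvariance(item) for item in items)
    total_var = pvariance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - item_vars / total_var)
```

For instance, `cronbach_alpha([[1, 1, 1], [1, 1, 0], [0, 1, 0], [0, 0, 0]])` gives 0.75; values around 0.7 or above are conventionally read as acceptable internal consistency.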
Looking to the future, we have yet to find the maximum volume this system can handle, but this year we plan to push numbers higher still. There is no longer any need to fear the barriers of time and space.