For decades, clinical competence in undergraduate medical, dental and health science curricula has been assessed using Objective Structured Clinical Examinations (OSCEs) and Multiple Mini Interviews (MMIs) (Harden, 1988; Harden & Gleeson, 1979). Although OSCEs and MMIs are known to be robust and useful assessments, the paper trail is laborious and expensive (Cusimano et al., 1994; Frye, Richards, Philp, & Philp, 1989).
One estimate of the development and administration of a six-station OSCE reports 327.5 hours of staff and faculty time for each rotation of students, equating to 8.2 hours of staff involvement per student (Cusimano et al., 1994). Implementation alone required 110 hours of staff and faculty time (2.75 hours per student). Direct expenses for that OSCE amounted to US$6.90 (approximately €4.70) per student per station (Cusimano et al., 1994).
Our medical school administers on average eleven OSCEs of 7-12 stations each for a cohort of 670 students, producing 9,380 assessment forms over the curriculum. The administrative cost of producing final OSCE results from this paper trail is €29,500, or €2.80 per paper form. Attempts have therefore been made to streamline OSCE administrative processes.
A number of universities and professional bodies now use our system, Qpercom Observe, for the retrieval, storage and analysis of OSCE and MMI station data. On average, the electronic process is 70% cheaper than the traditional paper trail, and considerably faster.
Improves validity and reduces inter-examiner variability
Qpercom Observe produces an online analysis of item scores and of overall total (raw) and adjusted scores, using standard setting of student performance after regression analysis. The mean, standard deviation (SD), minimum, maximum, range and mid-range are produced instantly, in real time, during the examination. The internal consistency of each station's item form (Cronbach's Alpha) provides insight into how consistently the items predict a student's overall score for that station. Borderline regression analysis (the Borderline Regression Method, as opposed to the Borderline Group Average) calculates a 'flexible cut-off score' complementary to the static 'standard' cut score for each individual station.
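The borderline regression idea can be sketched briefly: checklist scores are regressed on examiners' global ratings, and the predicted score at the "borderline" grade becomes the station's cut-off. The sketch below is a minimal illustration of that general method, not Qpercom Observe's implementation; the rating scale and station data are invented for the example.

```python
import numpy as np

def borderline_regression_cutoff(checklist_scores, global_ratings, borderline_grade=2):
    """Fit a linear regression of checklist scores on examiners' global
    ratings and return the predicted score at the borderline grade --
    the 'flexible cut-off' for the station."""
    slope, intercept = np.polyfit(global_ratings, checklist_scores, 1)
    return slope * borderline_grade + intercept

# Invented example: global ratings on a 1-5 scale (2 = borderline),
# paired with each student's total checklist score for one station.
ratings = np.array([1, 2, 2, 3, 3, 4, 4, 5, 5])
scores = np.array([8, 11, 12, 14, 15, 17, 18, 20, 21])
cutoff = borderline_regression_cutoff(scores, ratings)  # ≈ 11.36 here
```

Because the cut-off is derived from each station's own data, it adapts to station difficulty, which is the advantage over a single static pass mark.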
All analysis reports and data can be exported to Excel to facilitate further detailed analysis. Data can also be exported to perform a generalizability analysis (G- and D-study) with EduG software. We report the Standard Error of Measurement (SEM) for all supported assessments. The G-study indicates whether the outcome can be generalised to other OSCEs; the D-study shows how the generalizability of results can be improved.
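The two reliability quantities mentioned above follow standard psychometric formulas: Cronbach's alpha compares the sum of item variances with the variance of total scores, and SEM = SD × √(1 − reliability). A minimal sketch using those textbook formulas (again, not Qpercom's code; the score matrix is invented):

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for one station.
    item_scores: 2-D array, rows = students, columns = checklist items."""
    item_scores = np.asarray(item_scores, dtype=float)
    k = item_scores.shape[1]
    item_vars = item_scores.var(axis=0, ddof=1)       # per-item sample variance
    total_var = item_scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def standard_error_of_measurement(total_scores, reliability):
    """SEM = SD * sqrt(1 - reliability)."""
    sd = np.std(total_scores, ddof=1)
    return sd * np.sqrt(1 - reliability)

# Invented data: 4 students x 3 items (perfectly consistent, so alpha = 1.0).
data = np.array([[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4]])
alpha = cronbach_alpha(data)
totals = data.sum(axis=1)
# SEM shown with an assumed reliability of 0.84 for illustration.
sem = standard_error_of_measurement(totals, reliability=0.84)
```

The SEM gives a confidence band around a student's observed score, which is particularly useful when a result falls close to the cut score.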
Increases accuracy and improves feedback
Traditionally, OSCEs have been assessed with paper-based methods. However, a number of issues have been highlighted with this approach, including illegible handwriting, missing details (students' names and student numbers) and lost assessment sheets. Furthermore, manually calculating results and entering them into a database is time-consuming and subject to human error. In addition, feedback is rarely provided to students on their performance after paper-based assessments.
Feedback is one of the most powerful learning tools in medical education, yet many students have to progress without it: providing immediate feedback on OSCE or MMI performance is simply too labor-intensive when only paper forms are available. Qpercom Observe includes an instant student feedback email system that allows module coordinators to send global or detailed written feedback to students, with or without item details, global ratings and so on.
GET IN TOUCH
Unit 9B, Galway Technology Centre,
Mervue Business Park, Wellpark Rd,
Galway, Ireland H91 PY93
IE: +353 91395416
UK: +44 2033184998