A few words about this paper…
After the discovery that about 17 different styles of communication skills are used in communication skills training in medical education, it became apparent that we needed to validate the communication skills items included in OSCE checklists. Within our own School of Medicine, in the College of Medicine, Nursing and Health Sciences of the National University of Ireland, Galway, some 280 OSCE station assessment forms, collected over 4 years and from 4 different medical specialties, contained a variety of communication skills items. None of these had ever been validated against an existing reliable and valid communication skills questionnaire.
The MAAS-Global, developed at the University of Maastricht in the Netherlands, is one such validated questionnaire. Other questionnaires have also been shown to be valid and reliable, but for various reasons we chose the MAAS-Global and asked three independent raters to rate all 280 assessment forms, indicating how each item matched the 17 items across the three sections of the MAAS-Global. The overall Generalisability Kappa (G-Kappa) was 0.80, which is high for three raters, and still satisfactory when only two raters are involved (0.72).
If just one educationalist were to rate the communication skills items against the MAAS-Global, the G-Kappa would drop to an unacceptably low 0.57. The majority of our communication skills items (46%) correspond to the medical content of the consultation (Section 3 of the MAAS-Global). Only 12% are general communication skills items (Section 2), and 8.2% are linked to the separate phases of the consultation (Section 1). The remaining 34% of the items on the assessment forms were not considered to be related to communication skills at all and would probably be classified as clinical skills items.
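As an aside for the quantitatively inclined: these one-, two- and three-rater figures sit close to the classical Spearman-Brown relationship between single-rater and multi-rater reliability. The sketch below is our own illustration of that relationship, not the authors' G-theory analysis, and the function name is invented:

```python
def spearman_brown(rho_k: float, k_from: int, k_to: int) -> float:
    """Extrapolate a reliability coefficient from k_from raters to k_to raters."""
    # Step back to the implied single-rater reliability...
    rho_1 = rho_k / (k_from - (k_from - 1) * rho_k)
    # ...then project forward to k_to raters.
    return k_to * rho_1 / (1 + (k_to - 1) * rho_1)

# Starting from the reported three-rater G-Kappa of 0.80:
print(round(spearman_brown(0.80, 3, 1), 2))  # 0.57
print(round(spearman_brown(0.80, 3, 2), 2))  # 0.73, close to the reported 0.72
```

The small gap at two raters is expected: the paper derives its coefficients from estimated variance components rather than from this textbook formula.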
We presume that our medical school does not differ radically from other medical schools. If one purpose of medical training is to teach exchangeable and transferable skills, one would expect communication skills, and other clinical skills, to be assessed and marked in the same manner, at least within Europe. We strive to ensure that students' results are comparable across stations, students and institutions, ideally at an international level. Apart from the insights gained into our local communication skills assessments, it is good to know that only two raters are required to validate local communication skills assessment forms within any other institution.
Winny Setyonugroho, Thomas J.B. Kropmans, Kieran M. Kennedy, Brian Stewart, Jan van Dalen
Communication skills (CS) are commonly assessed using ‘communication items’ in Objective Structured Clinical Examination (OSCE) station checklists. Our aim is to calibrate the communication component of OSCE station checklists according to the MAAS-Global, which is a valid and reliable standard for assessing CS in undergraduate medical education.
Three raters independently compared 280 checklists from 4 disciplines contributing to the undergraduate year 4 OSCE against the 17 items of the MAAS-Global standard. G-theory was used to analyze the reliability of this calibration procedure.
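For readers unfamiliar with G-theory: in the simplest fully crossed design, every object of measurement is scored by every rater, variance components are estimated from a two-way ANOVA, and the G-coefficient for k raters follows from them. The sketch below is a hypothetical, self-contained illustration with invented toy data; the function name and numbers are ours, not the paper's:

```python
def g_coefficient(scores, k):
    """Relative G-coefficient for k raters, from a fully crossed
    objects-by-raters table (rows = objects of measurement, cols = raters)."""
    n_p, n_r = len(scores), len(scores[0])
    grand = sum(map(sum, scores)) / (n_p * n_r)
    p_means = [sum(row) / n_r for row in scores]
    r_means = [sum(row[j] for row in scores) / n_p for j in range(n_r)]
    # Sums of squares for the two-way layout without replication.
    ss_p = n_r * sum((m - grand) ** 2 for m in p_means)
    ss_r = n_p * sum((m - grand) ** 2 for m in r_means)
    ss_tot = sum((x - grand) ** 2 for row in scores for x in row)
    ms_p = ss_p / (n_p - 1)
    ms_e = (ss_tot - ss_p - ss_r) / ((n_p - 1) * (n_r - 1))
    var_p = max((ms_p - ms_e) / n_r, 0.0)  # object-of-measurement variance
    return var_p / (var_p + ms_e / k)      # error term shrinks as raters are added

# Toy ratings: 5 checklist items scored by 3 raters (invented numbers).
toy = [[1, 2, 1], [4, 4, 5], [2, 2, 2], [5, 4, 5], [3, 3, 2]]
print(round(g_coefficient(toy, 3), 2))  # 0.95 with these toy numbers
print(round(g_coefficient(toy, 1), 2))  # 0.86: fewer raters, lower reliability
```

The point of the design choice is visible in the last line: the same data yield a lower coefficient when generalising to a single rater, which is exactly the pattern the paper reports.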
G-Kappa was 0.8; for two raters it was 0.72, falling to 0.57 for one rater. 46% of the checklist items corresponded to section three of the MAAS-Global (i.e. medical content of the consultation), whilst 12% corresponded to section two (i.e. general CS), and 8.2% to section one (i.e. CS for each separate phase of the consultation). The remaining 34% of the items were not considered to be CS.
A G-Kappa of 0.8 confirms a reliable and valid procedure for calibrating OSCE CS checklist items against the MAAS-Global. We strongly suggest that such a procedure be more widely employed to arrive at a stable (valid and reliable) judgment of the communication component in existing checklists of medical students’ communication behaviours.
It is possible to measure the ‘true’ calibre of CS in OSCE stations. Students’ results thereby become comparable across stations, students and institutions. A reliable calibration procedure requires only two raters.