
1 Introduction

The Objective Structured Clinical Examination (OSCE) is an important tool for assessing clinical skills and is often one of the main assessment components of clinical training in undergraduate medical programmes (Davis 2003). The strength of the OSCE lies in its ability to assess a wide range of domains in clinical learning, from history taking, physical examination and communication to ethics and professionalism. It is widely accepted as a reliable tool for competency-based assessment in the clinical phase of a medical programme (Rethans et al. 2002).

In Taylor’s University School of Medicine, the OSCE is the main assessment tool for the end-of-semester (EOS) summative examination in phase 2 clinical training. As the programme is new to this university, only two cohorts of students have entered the clinical phase since September 2012, and these cohorts have completed the semester 5, 6 and 7 EOS OSCEs.

During the OSCE, candidates are required to complete a total of 16 stations, each with specific instructions on the task to be performed. A station may involve taking a history from a patient, performing a specific physical examination, interpreting test results, explaining results to a patient or providing patient education. An examiner is placed at each station and assesses the candidate using a predetermined objective marking checklist. Candidates are given only 6 min to complete each task.

Because the OSCE is the main clinical assessment tool, it is important to evaluate the overall reliability of all OSCEs conducted over the past 2 years. Overall reliability estimates the correlation between scores on these assessments and scores on hypothetical assessments composed of the entire portfolio of problems (Brannick et al. 2011); this portfolio typically comprises common encounters in real-life clinical practice.

The objective of this paper is to answer two research questions: (1) What is the overall reliability of the EOS OSCEs in phase 2 clinical assessment? (2) Are the various domains of clinical training adequately assessed across these OSCE stations?

2 Methodology

This is a retrospective review of the EOS assessments in the phase 2 clinical programme from semesters 5 to 7. The study was conducted over a period of 2 weeks, from 1 to 14 May 2014. All OSCE scores (marks) were retrieved from the academic records of students in semesters 5, 6 and 7, and the EOS 5, 6 and 7 OSCE papers were retrieved from the phase 2 clinical school academic office. Approval was obtained from the phase 2 programme director and the dean of the School of Medicine, Taylor’s University. Two researchers reviewed all the OSCE papers and identified the domains assessed in each question. The domains commonly assessed in the OSCEs were history taking, physical examination, performing a procedure, interpretation of laboratory test results, recognizing common electrocardiographs (ECGs), interpreting radiographic images, identifying common medical instruments and drugs, counselling patients and communication skills. The OSCE domains were tabulated in a Microsoft Excel spreadsheet, and the content of all questions was reviewed. SPSS version 17 was used to compute the Cronbach's alpha reliability coefficient. The ethical principles of confidentiality and justice were adhered to: the results were handled only by the researchers, and the findings of the analysis would not lead to any alteration of students' results.
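For readers who wish to reproduce the reliability figures, the sketch below illustrates how Cronbach's alpha is computed from a candidates-by-stations score matrix. This is a minimal illustration in Python, not the authors' SPSS procedure; the function name and the demonstration data are hypothetical.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (candidates x stations) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of stations
    item_var = scores.var(axis=0, ddof=1).sum()  # sum of per-station variances
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of candidates' totals
    return (k / (k - 1)) * (1.0 - item_var / total_var)

# Hypothetical data: 30 candidates across 16 stations, scores out of 20.
# In practice the matrix would hold the actual OSCE marks.
rng = np.random.default_rng(42)
demo_scores = rng.integers(8, 21, size=(30, 16))
print(f"Cronbach's alpha = {cronbach_alpha(demo_scores):.2f}")
```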

3 Results

All the EOS assessments in phase 2 clinical training were integrated components drawn from various postings. EOS 5 assessed internal medicine, surgery, paediatrics, and obstetrics and gynaecology; EOS 6 assessed family medicine, emergency medicine, orthopaedics, otorhinolaryngology and psychiatry; and EOS 7 assessed ophthalmology, anaesthesiology and critical care, internal medicine 2 and surgery 2. Community medicine, laboratory medicine and radiology were also integrated into these EOS assessment components. With a total of 16 stations each, the reliability values of the semester 5, 6 and 7 OSCEs were 0.65, 0.72 and 0.61, respectively (Table 1); these values are considered acceptable. The distribution of the domains assessed in the semester 5, 6 and 7 OSCEs is shown in Table 2. History taking, physical examination, psychomotor skills in performing procedures, the ability to interpret test results and communication skills were fairly evenly distributed across all the postings in all the EOS assessments.

Table 1 Cronbach alpha reliability of EOS 5, 6 and 7 OSCEs
Table 2 Domains assessed across different subjects in EOS 5, 6 and 7 OSCEs

4 Discussion

The overall reliability of the EOS OSCEs in phase 2 clinical training was above 0.6, which is acceptable. The majority of published studies on the OSCE report reliability values between 0.41 and 0.88 (Swanson and Norcini 1989). Reliability can be improved further by increasing the number of questions, but this lengthens the total assessment time. Because the OSCE requires a large amount of manpower and preparation work, cost-effectiveness is another important consideration. Although a reliability value above 0.9 is considered excellent, pursuing it risks redundant testing of the same topics, which brings no additional benefit; it increases the cost of running unnecessary stations and adds to candidates' and examiners' fatigue (Barman 2005).
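The trade-off between station count and reliability can be made concrete with the Spearman-Brown prophecy formula, a standard psychometric projection that was not part of the original analysis. The sketch below applies it to the semester 7 value of 0.61 purely as an illustration.

```python
def spearman_brown(alpha, m):
    """Projected reliability when a test is lengthened by a factor m."""
    return (m * alpha) / (1 + (m - 1) * alpha)

# Illustration: semester 7 OSCE had alpha = 0.61 with 16 stations.
# Doubling to 32 stations (m = 2) projects to roughly 0.76, at twice
# the assessment time, manpower and cost.
print(f"{spearman_brown(0.61, 2):.2f}")
```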

This study also demonstrated that, in the end-of-semester 5, 6 and 7 OSCEs, all the necessary domains, including knowledge, psychomotor skills and communication skills, were adequately assessed. This is particularly important because our EOS is an integrated assessment of all clinical subjects, so it is crucial to ascertain that the outcomes of all domains are being assessed. These domains are essential components of clinical training and help ensure that students graduate as competent, well-trained medical practitioners.

5 Conclusion and Recommendation

With a total of 16 OSCE stations in each of the end-of-semester 5, 6 and 7 summative assessments at Taylor’s University Clinical School, the OSCE is a reliable assessment, and all the essential domains of clinical training are adequately assessed. This is a preliminary survey of the quality of assessment in the clinical school. Further detailed analysis, including reliability within stations, and future research to determine the minimum number of stations required to achieve good reliability are necessary to improve the quality of clinical assessment in phase 2 clinical training.
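As a starting point for the recommended follow-up work, the Spearman-Brown formula can also be inverted to estimate how many stations would be needed to reach a chosen reliability target. The sketch below assumes a target of 0.80 purely for illustration, and further assumes that any added stations behave like the existing ones.

```python
import math

def stations_needed(alpha, k, target):
    """Minimum stations for `target` reliability, inverting Spearman-Brown.
    Assumes added stations are psychometrically similar to existing ones."""
    m = (target * (1 - alpha)) / (alpha * (1 - target))
    return math.ceil(m * k)

# Semester 7 OSCE: alpha = 0.61 over 16 stations; hypothetical target 0.80
print(stations_needed(0.61, 16, 0.80))  # about 41 stations
```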