Abstract
The high-stakes (national) objective structured clinical examination (OSCE) is a vital tool for assessing clinical competence in medical students. A formative OSCE with feedback can narrow the gap between learners' actual and desired performance. This study aimed to explore the outcome of a simple formative OSCE held before a high-stakes OSCE. This quasi-experimental study compared the passing status and scores of a high-stakes OSCE (P2 OSCE) with those of a local formative OSCE (P1 OSCE) after feedback was given on the P1 OSCE. Written formative feedback addressed the positive and negative aspects of each student's performance during the examination, with suggestions for improvement before the high-stakes OSCE. The study included a total of 520 students. Ninety-eight students (18.8%) who failed the P1 OSCE passed the P2 OSCE; only five students (1%) who failed the P1 OSCE also failed the P2 OSCE. The P1 and P2 OSCE scores differed significantly (p < 0.001, Wilcoxon signed-rank test). The formative feedback enriched the learning process, helping students improve their performance and reflect on it outside a pass/fail environment. Formative feedback in a local formative OSCE before a national high-stakes OSCE could increase the passing rate and scores in the high-stakes OSCE.
1 Introduction
National medical competency examinations have been conducted to evaluate medical school education. The Indonesian Medical Doctor National Competency Examination (IMDNCE) is a national medical competency exit examination established in 2014 [1]. The IMDNCE consists of multiple-choice questions delivered by computer-based testing (MCQs-CBT) and an Objective Structured Clinical Examination (OSCE). This high-stakes OSCE is an essential tool for assessing clinical skill competency [1]. It is designed to evaluate students' history-taking, physical examination, communication, and professionalism. The OSCE has been used worldwide to teach and assess learners' competencies, particularly in healthcare [1, 2].
Formative OSCEs are assessments for learning and do not determine whether students pass or fail; they are conducted to familiarize students with the OSCE format. They differ from summative OSCEs, which are assessments of learning that count toward a grade and formally evaluate the clinical skills and knowledge required for graduation [3]. Formative OSCEs contribute positively to final summative examination performance [4]. During formative OSCEs, students gain experience interacting with standardized patients and their teachers at each station [5].
Although the OSCE is well established as an effective assessment tool for clinical competence, feedback after the OSCE is vital for improving students' clinical skills. When accompanied by proper feedback, the OSCE serves not only assessment but also learning purposes. Feedback is an essential element of the educational process [6] and a continual exchange between teacher and student [7]. Good feedback practices may strengthen students' self-regulatory ability [6]. Formative feedback gives learners information they can use to improve their learning [8], and formative feedback on an OSCE can narrow the gap between actual and desired performance. Feedback is a fundamental learning tool in medical education, and excellent, effective feedback enhances student motivation and satisfaction [8].
Feedback can range from a simple judgment of correctness to identification of parts that could be improved and invitations to share ideas [9]. Formative feedback can be verbal or written; written feedback can take the form of comments, questions, corrections, and other notes that develop student understanding and provide correction [9].
Because of its nature, some students treat formative feedback merely as a way to do better on the summative assessment and therefore do not take the formative assessment seriously, overlooking that formative feedback helps students reach performance milestones and narrows the gap between actual and desired performance [10]. Although formative feedback is underutilized because of its complexity and variability, its benefit in highlighting the drawbacks of a curriculum or educational program is significant [10]. This study aimed to explore the outcome of simple feedback on a national high-stakes OSCE by comparing the passing status and scores of the high-stakes OSCE (post-feedback; P2) with those of a local formative OSCE (pre-feedback; P1) after feedback was given on the P1 OSCE.
2 Methods
This study used a quasi-experimental design with final-year medical doctor (MD) profession students of Atma Jaya Catholic University of Indonesia who took the P2 OSCE between 2017 and 2019. Before the P2 OSCE, the faculty conducted the P1 OSCE to assess the students' readiness. Both examinations used the same method: each student completed a full cycle of 12 stations in a single session of both the P1 and P2 OSCE. Each station covered a different clinical case or procedure, and the examiner was not necessarily an expert on that topic. The examiners were general practitioners holding at least a Master's degree, or specialists.
At each OSCE station, the examiner completed a specific scoring sheet (actual mark) from the grading-format checklist and an overall performance sheet (global rating), the latter used for the borderline regression standard-setting procedure. The scoring sheet comprised three to eight criteria (history taking, physical examination, laboratory workup, differential and working diagnosis, pharmacotherapy plan, non-pharmacotherapy plan, communication, education, or professionalism) to assess the specific clinical skills of each scenario. The global rating was scored as (4) superior, (3) passing, (2) borderline, or (1) not-pass performance. Both the P1 and P2 OSCE used the same grading format.
The borderline regression method is one of the more recent methods for evaluating students' performance at each station, using both the completed checklist and the global rating scale. The checklist marks of all examinees at a station are regressed on their global rating scores, yielding a linear equation. The global score representing borderline performance (e.g., 2 on the global rating scale) is then substituted into this equation to predict the pass/fail cut score for the checklist marks.
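As a sketch, this procedure reduces to an ordinary least-squares fit of checklist marks on global ratings; the marks and ratings below are hypothetical illustrations, not the study's data:

```python
# Borderline regression standard setting: a minimal sketch with
# illustrative (hypothetical) data for a single station.

# Per-examinee data: checklist mark (0-100) and global rating
# (1 = not-pass, 2 = borderline, 3 = passing, 4 = superior).
checklist = [45.0, 52.0, 60.0, 58.0, 70.0, 75.0, 82.0, 90.0]
rating = [1, 2, 2, 3, 3, 3, 4, 4]

# Ordinary least squares: mark = slope * rating + intercept.
n = len(rating)
mean_x = sum(rating) / n
mean_y = sum(checklist) / n
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(rating, checklist))
sxx = sum((x - mean_x) ** 2 for x in rating)
slope = sxy / sxx
intercept = mean_y - slope * mean_x

# Substitute the borderline rating (2) into the fitted line to get
# the station's pass/fail cut score for the checklist marks.
cut_score = slope * 2 + intercept
print(round(cut_score, 2))  # → 56.2
```

Each station gets its own regression and therefore its own cut score, so a hard station (low marks for a given rating) automatically receives a lower passing threshold.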
Written formative feedback was given on the P1 OSCE regarding the positive and negative aspects of each student's performance during the examination, with suggestions for improvement before the P2 OSCE. The written feedback was collected from each station, then sorted and compiled by student ID. It was given to each student individually approximately two weeks before the P2 OSCE, and students were given an optional practice period until one week before the P2 OSCE. The feedback was intended to prompt the self-reflection students needed to improve their performance before the P2 OSCE.
Statistical analysis was performed using STATA version 14.1 (STATA Corp., College Station, TX, USA). Differences in the P2 passing rate between batches were compared using Fisher's exact probability test, as were the passing-status frequencies of the P1 and P2 OSCE. The P1 and P2 OSCE scores were compared using the Wilcoxon signed-rank test. A p-value <0.05 was considered statistically significant.
3 Results
This study was conducted on 520 MD students from 10 batches of P2 OSCE. There were no significant differences between each batch of the P2 OSCE (Table 1).
There were 98 students who failed the P1 OSCE but passed the P2 OSCE, while only five students who failed P1 OSCE also failed the P2 OSCE. Only one student passed the P1 OSCE but failed the P2 OSCE (Table 2).
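For illustration, a one-sided Fisher's exact probability for this passing-status table can be computed from first principles via the hypergeometric distribution (a sketch only; the study's analysis used STATA, and the 416 students who passed both exams are implied by the reported totals, 520 − 98 − 5 − 1):

```python
from math import comb

def fisher_exact_greater(a, b, c, d):
    """One-sided (greater) Fisher's exact p-value for the 2x2 table
    [[a, b], [c, d]]: probability of a table at least as extreme as
    the observed one, with all margins fixed."""
    r1, r2, c1 = a + b, c + d, a + c   # row sums and first column sum
    n = r1 + r2
    denom = comb(n, c1)
    a_max = min(r1, c1)
    # Sum hypergeometric point probabilities from the observed cell up.
    return sum(comb(r1, k) * comb(r2, c1 - k)
               for k in range(a, a_max + 1)) / denom

# Rows: P1 pass / P1 fail; columns: P2 pass / P2 fail.
p = fisher_exact_greater(416, 1, 98, 5)
print(round(p, 4))  # → 0.0014
```

The association between P1 and P2 passing status is significant at the 0.05 level even in this simplified one-sided form.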
There was a significantly higher median in the score of P2 OSCE (80.56 out of 100) than P1 OSCE (71.56 out of 100) (p-value < 0.001) (Fig. 1). The positive predictive value (PPV) of P1 OSCE as a predictor of passing P2 OSCE was 99.76%, and the negative predictive value (NPV) of P1 OSCE as a predictor of failing P2 OSCE was 4.85%.
4 Discussion
The results of this study showed that the formative OSCE benefited students' readiness for the summative OSCE. A formative OSCE before the summative OSCE had a positive effect on students' scores: there were significant differences (p < 0.001, Wilcoxon signed-rank test) between the formative and high-stakes OSCE scores. Our findings may support implementing formative OSCEs to prepare students for high-stakes summative OSCEs. Formative OSCEs contribute positively to final summative examination performance [4], and during them students gain experience interacting with standardized patients and their teachers at each station [5].
This study analyzed MD students' improvement from the formative OSCE to the high-stakes (summative national) OSCE after written formative feedback. Once they received written feedback on their weaknesses and strengths, the students had two weeks to prepare for the high-stakes OSCE. Formative feedback can enrich the learning process, helping students improve their performance and reflect on it outside a pass/fail environment. It is essential to give them time to encourage self-assessment and reflection on their strengths and the areas needing improvement [11]. Formative feedback on an OSCE can narrow the gap between the learner's actual and desired performance. Feedback is a fundamental learning tool in medical education, and excellent, effective feedback enhances student motivation and satisfaction [8].
Feedback effectiveness is influenced by many factors, such as the characteristics of the feedback source (stringency/leniency), the recipient, and the message itself [12, 13]. Rebel and colleagues [14] stated that removing pass/fail results allows students to receive specific input on their skill performance, narrowing the gap between actual and desired performance. Formative feedback from a clinically experienced physician can help students improve their history taking, physical examination, communication, and especially clinical reasoning [2]. A study by Alkhateeb and colleagues [15] found that a single P1 OSCE did not change the pass rate of the P2 OSCE. That finding is discordant with ours, which showed an increase in both the passing rate and the score of the P2 compared with the P1 OSCE. The difference might reflect selection bias in the Alkhateeb study, where participation in the formative OSCE was voluntary. Our results resemble those of Chisnall [16], who showed an increase in pass rates and scores, and a previous study by Bandiera [17] also showed a positive effect of feedback on past examination performance on future performance. This might be due to the potential of formative feedback to provide students with a rich and meaningful learning experience [10].
The good PPV but poor NPV implies that students who passed the P1 OSCE were very likely to pass the P2 OSCE, while those who failed the P1 OSCE were still unlikely to fail the P2 OSCE. This suggests that the P1 OSCE improves performance and helps prevent failure in the P2 OSCE. Students have also perceived the P1 OSCE as a positive and valuable activity that helps them prepare for the P2 OSCE [16].
This study did not analyze the effectiveness of the feedback or assess how each piece of feedback improved performance in the P2 OSCE [12, 13]. This is a limitation of the study, although many publications support its findings. The study design also did not control whether the examiners in the P1 and P2 OSCE were generalists or specialists.
As a feedback provider, the teacher has an essential role in encouraging learners' self-assessment and reflection, so the examiner's background should match the clinical topic of the examination station [4]. Perron [2] found that the quality of feedback during formative OSCEs depends on the tutor's profile: specialists reported less training in providing feedback than generalists, whereas generalists were more learner-centered and paid more attention to communication and professionalism during feedback.
Medical students may receive feedback in various clinical and non-clinical contexts where opportunities for feedback on clinical competencies are valuable [18]. Many studies have reported face-to-face feedback from the examiner immediately after the OSCE within an additional time allocation, audio or video recordings of the examiner providing generic or personalized feedback, and written OSCE feedback [19, 20]. Ngim [18] reported that medical students prefer written feedback to face-to-face feedback. In our study, the written feedback was unstructured and lacked guidelines for creating high-quality feedback, which caused wide variance between examiners [21] and is another limitation of this study. High-quality feedback should be specific, balanced, and constructive, and should describe the gap in the student's learning and the behaviors observed during the exam [2].
The primary goal of formative, low-stakes assessment is to support learners' progress, but learners and teachers perceive low-stakes assessment differently. Learners often do not appreciate the value of low-stakes assessments in guiding their learning, and if teachers do not fully understand the meaning and purpose of assessment, low-stakes assessments and their potential learning benefits go to waste [22, 23]. Van der Vleuten and Schuwirth proposed a programmatic assessment model that provides a holistic overview of students' competency development for both formative feedback and summative decisions [24]. Multiple low-stakes assessments with constructive feedback can inform high-stakes decisions that have substantial consequences for learners [22]. Bok [24] found that programmatic assessment is not easy to implement: students and supervisors must be trained in providing assessment and feedback so that both share the same understanding of the role of low-stakes assessment and feedback in the learning process and their contribution to high-stakes decisions. Enhancing feedback quality, for example through modern technology or scoring rubrics on the assessment form, is also needed for better implementation of programmatic assessment [24].
5 Conclusions
Assessment and feedback are essential components of medical education. A formative OSCE before the summative OSCE had a positive effect on students' scores and passing status: our study found that simple formative feedback on a formative OSCE held before the summative OSCE could increase the passing rate and scores in the summative OSCE. Structured written feedback is still needed to narrow the feedback variance between examiners.
Abbreviations
- IMDNCE: Indonesian Medical Doctor National Competency Examination
- MCQs-CBT: Multiple-Choice Questions using Computer-Based Testing
- OSCE: Objective Structured Clinical Examination
- MD: Medical Doctor
References
Utomo PS, Randita ABT, Riskiyana R, Kurniawan F, Aras I, Abrori C et al (2022) Predicting medical graduates’ clinical performance using national competency examination results in Indonesia. BMC Med Educ 22:254. https://doi.org/10.1186/s12909-022-03321-x
Junod Perron N, Louis-Simonet M, Cerutti B, Pfarrwaller E, Sommer J, Nendaz M (2016) The quality of feedback during formative OSCEs depends on the tutors’ profile. BMC Med Educ 16:293. https://doi.org/10.1186/s12909-016-0815-x
Blamoun J, Hakemi A, Armstead T (2021) A guide for medical students and residents preparing for formative, summative, and virtual objective structured clinical examination (OSCE): twenty tips and pointers. Adv Med Educ Pract 12:973–978. https://doi.org/10.2147/AMEP.S326488
Townsend AH, McLlvenny S, Miller CJ, Dunn EV (2001) The use of an objective structured clinical examination (OSCE) for formative and summative assessment in a general practice clinical attachment and its relationship to final medical school examination performance. Med Educ 35:841–846. https://doi.org/10.1046/j.1365-2923.2001.00957.x
Lien H-H, Hsu S-F, Chen S-C, Yeh J-H (2016) Can teaching hospitals use serial formative OSCEs to improve student performance? BMC Res Notes 9:464
Van De Ridder JMM, Stokking KM, McGaghie WC, Ten Cate OTJ (2008) What is feedback in clinical education? Med Educ 42:189–197. https://doi.org/10.1111/j.1365-2923.2007.02973.x
Roa Romero Y, Tame H, Holzhausen Y, Petzold M, Wyszynski J-V, Peters H et al (2021) Design and usability testing of an in-house developed performance feedback tool for medical students. BMC Med Educ 21:354. https://doi.org/10.1186/s12909-021-02788-4
Shute VJ (2008) Focus on formative feedback. Rev Educ Res 78:153–189
Morris R, Perry T, Wardle L (2021) Formative assessment and feedback for learning in higher education: a systematic review. Rev Educ 9:e3292. https://doi.org/10.1002/rev3.3292
Kulasegaram K, Rangachari PK (2018) Beyond “formative”: assessments to enrich student learning. Adv Physiol Educ 42:5–14. https://doi.org/10.1152/advan.00122.2017
Ramani S, Könings KD, Ginsburg S, van der Vleuten CPM (2019) Feedback redefined: principles and practice. J Gen Intern Med 34:744–749. https://doi.org/10.1007/s11606-019-04874-2
Kogan JR, Conforti LN, Bernabeo EC, Durning SJ, Hauer KE, Holmboe ES (2012) Faculty staff perceptions of feedback to residents after direct observation of clinical skills. Med Educ 46:201–215. https://doi.org/10.1111/j.1365-2923.2011.04137.x
Wong WYA, Roberts C, Thistlethwaite J (2020) Impact of structured feedback on examiner judgements in Objective Structured Clinical Examinations (OSCEs) using generalisability theory. Health Prof Educ 6:271–281. https://doi.org/10.1016/j.hpe.2020.02.005
Rebel A, Hester DL, DiLorenzo A, McEvoy MD, Schell RM (2018) Beyond the “E” in OSCE. Anesth Analg 127:1092–1096
Alkhateeb NE, Al-Dabbagh A, Ibrahim M, Al-Tawil NG (2019) Effect of a formative objective structured clinical examination on the clinical performance of undergraduate medical students in a summative examination: a randomized controlled trial. Indian Pediatr 56:745–748
Chisnall B, Vince T, Hall S, Tribe R (2015) Evaluation of outcomes of a formative objective structured clinical examination for second-year UK medical students. Int J Med Educ 6:76–83. https://doi.org/10.5116/ijme.5572.a534
Bandiera O, Larcinese V, Rasul I (2015) Blissful ignorance? A natural experiment on the effect of feedback on students’ performance. Labour Econ 34:13–25. https://doi.org/10.1016/j.labeco.2015.02.002
Ngim CF, Fullerton PD, Ratnasingam V, Arasoo VJT, Dominic NA, Niap CPS et al (2021) Feedback after OSCE: a comparison of face to face versus an enhanced written feedback. BMC Med Educ 21:180. https://doi.org/10.1186/s12909-021-02585-z
Harrison CJ, Molyneux AJ, Blackwell S, Wass VJ (2015) How we give personalised audio feedback after summative OSCEs. Med Teach 37:323–326
Wardman MJ, Yorke VC, Hallam JL (2018) Evaluation of a multi-methods approach to the collection and dissemination of feedback on OSCE performance in dental education. Eur J Dent Educ 22:e203–e211. https://doi.org/10.1111/eje.12273
Schwill S, Fahrbach-Veeser J, Moeltner A, Eicher C, Kurczyk S, Pfisterer D et al (2020) Peers as OSCE assessors for junior medical students – a review of routine use: a mixed methods study. BMC Med Educ 20:17. https://doi.org/10.1186/s12909-019-1898-y
Schut S, Heeneman S, Bierer B, Driessen E, van Tartwijk J, van der Vleuten C (2020) Between trust and control: teachers’ assessment conceptualisations within programmatic assessment. Med Educ 54:528–537. https://doi.org/10.1111/medu.14075
Schut S, Driessen E, van Tartwijk J, van der Vleuten C, Heeneman S (2018) Stakes in the eye of the beholder: an international study of learners’ perceptions within programmatic assessment. Med Educ 52:654–663. https://doi.org/10.1111/medu.13532
Bok HGJ, Teunissen PW, Favier RP, Rietbroek NJ et al (2013) Programmatic assessment of competency-based workplace learning: when theory meets practice. BMC Med Educ 13:123. https://doi.org/10.1186/1472-6920-13-123
Ethics Approval
This study obtained approval from the Commission of Ethics Faculty of Medicine Unika Atma Jaya Jakarta No.33/03/KEP-FKIKUAJ/2022.
Competing Interests
The authors declare that there are no competing interests related to the study.
Authors' Contribution
Komang Ardi Wahyuningsih—developing research proposal, collecting data, data analysis, and publication of the manuscript.
Nawanto Agung Prastowo—collecting data and publication of the manuscript.
Veronica Dwi Jani Juliawati—collecting data and publication of the manuscript.
Christian Ardianto—developing research proposal, collecting data, data analysis, and publication of the manuscript.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Cite this paper
Wahyuningsih, K.A., Prastowo, N.A., Juliawati, V.D.J., Ardianto, C. (2023). Formative Objective Structured Clinical Examination (OSCE) as a Learning Tool and Predictor of High-Stakes OSCE. In: Claramita, M., Soemantri, D., Hidayah, R.N., Findyartini, A., Samarasekera, D.D. (eds) Character Building and Competence Development in Medical and Health Professions Education. INA-MHPEC 2022. Springer Proceedings in Humanities and Social Sciences. Springer, Singapore. https://doi.org/10.1007/978-981-99-4573-3_12
DOI: https://doi.org/10.1007/978-981-99-4573-3_12
Publisher Name: Springer, Singapore
Print ISBN: 978-981-99-4572-6
Online ISBN: 978-981-99-4573-3