Abstract
Introduction
The implementation of programs of assessment based on Entrustable Professional Activities (EPAs) offers an opportunity for students to obtain unique data to guide their ongoing learning and development. Although authors have explored factors that contribute to trust-based decisions, learners’ use of assessors’ decisions about the level of supervision they need has not been fully investigated.
Methods
In this study, we conducted semi-structured interviews of clerkship students who participated in the first year of our EPA program to determine how they interpret and use supervision ratings provided in EPA assessments. Content analysis was performed using concept-driven and open coding.
Results
Nine interviews were completed. Twenty-two codes derived from previous work describing factors involved in trust decisions and 12 novel codes were applied to the interview text. Analyses revealed that students focus on written and verbal feedback from assessors more so than on supervision ratings. Axial coding revealed a temporal organization that categorized how students considered the data from EPA assessments. While factors before, during, and after an assessment affected students’ use of information, the relationship between the student and the assessor had impact throughout.
Conclusions
Although students reported varying use of the supervision ratings, their perspectives about how assessors and students interact and/or partner before, during, and after assessments provide insights into the importance of an educational alliance in making a program of assessment meaningful and acceptable to learners.
Introduction
Entrustable Professional Activity (EPA) assessments provide data about learners’ readiness to perform workplace-based clinical tasks with a specific level of supervision [1, 2]. Since the introduction of EPAs as an approach to competency-based education, attention has been given to understanding what EPAs are, how they can be used to design curricula and assess learner performance, and how supervision/trust-based decision-making by supervisors in health professions education occurs [2,3,4,5,6,7]. But these factors are only part of the equation when considering the potential impact of entrustment as the foundation for assessment [8, 9]. To date, the exploration of medical students’ use of EPA supervision recommendations, i.e., entrustment decisions, has been limited [10]. Given the resources being used to implement programs of assessment using EPAs, it is critical to understand how learners use the data from EPA assessment [10,11,12,13,14]. This matters because if learners do not find the assessment process and the results of assessments to be credible, they will miss key opportunities to use data from assessments for learning and ongoing development [15, 16].
Since the introduction of EPAs, much has been written about trust and the factors that influence supervisors’ trust [5,6,7, 17]. ten Cate et al. summarized categories of factors related to the supervisor, the trainee, the supervisor-trainee relationship, the context, and the task that impact a supervisor’s ad hoc entrustment decisions [6]. Ad hoc decisions “are based on a mix of estimated trustworthiness of the trainee, estimated risk of the situation, urgency of the job to be done, and suitability of this task at this moment for this learner [6].” Exploration of residents’ perspectives about how EPAs and entrustment scales affect feedback and learning illuminates the tension for learners when assessment aims to achieve the concurrent goals of promoting their development and granting autonomy for patient care [18,19,20]. This issue was underscored in another recent study in which residents noted that completion of assessments had become a “form-filling exercise” and suggested that EPA assessments “blurred the lines between formative and summative assessment [21].” The challenges of asking assessors to provide data that both promote learners’ development and judge their performance require systematic efforts to structure opportunities for assessment and to help teachers clearly understand their role and the influence of interpersonal relationships on learners’ perceptions about the information being shared [22,23,24].
Given the tenets of programmatic assessment and assessment for learning, learners should be able to use and act upon information from assessments to advance their performance and development [25]. Recognizing the importance of considering students as stakeholders in systems of assessment, Ricci et al. investigated how students integrate feedback from assessment and use the information provided to advance their learning [26]. It is clear that to maximize the impact of assessment to support learning, evidence-based feedback is required [15, 18]. Feedback and formative assessment are most influential if provided within an assessment system that promotes student agency and that includes meaningful assessments and support structures to assist learners in interpreting feedback [10, 23, 27, 28]. Feedback provided within a supportive program of assessment not only informs learners’ self-assessment but also supports decisions about granting learners progressive autonomy [10, 29,30,31]. While authors have described students’ beliefs about how feedback after an EPA assessment would prepare them to perform the task unsupervised and have highlighted the importance of students’ seeing assessment as an opportunity to obtain data to inform and validate their self-assessment, the use of entrustment as a framework for assessment requires “…a significant shift in medical student mindset” [10, 14, 32].
At our institution, the Core Entrustable Professional Activities for Entering Residency (Core EPAs) are used as a framework for teaching and clinical assessment across all 4 years of the curriculum [33, 34]. During their clinical clerkships, students request EPA assessments: they ask an assessor to observe them during an authentic clinical encounter with a patient, partner with the assessor to determine which EPA task(s) will be observed, and send the assessment request via a web-enabled tool that can be completed “just-in-time” by the assessor. Clinical encounters are directly observed by ad hoc, discipline-specific assessors (faculty, residents, and fellows) or by Master Assessors (experienced faculty trained to perform assessments across clinical disciplines and contexts). Completion of an EPA assessment requires an assessor to provide a supervision rating in addition to verbal feedback and narrative comments about the student’s strengths and areas of development. Assessments are designed to be criterion-based, using performance expectations for each task developed by program leaders to define the behaviors needed to perform the observed task with indirect supervision. The supervision rating is based on a scale modified for use in undergraduate medical education [35]. Ad hoc entrustment is delineated by the level of supervision the assessor recommends for the next time the student completes the task: from joint performance with a supervisor to performance with a supervisor nearby, available to double-check key elements of the task [2, 6, 35]. The results are available to students as soon as the supervisor completes an assessment. In the first year of implementation, the following EPA tasks were assessed: history taking and physical examination skills, development and prioritization of differential diagnoses, documentation of patient encounters, and provision of an oral presentation [33].
In this study, we sought to better understand students’ use of entrustment decisions by asking the following research question: “How do clerkship students use supervision recommendations provided during EPA assessments based on observed clinical encounters?”
Methods
We conducted semi-structured, one-on-one interviews with students who had completed EPA assessments on their clerkships during the first year of full implementation of the EPA program (end of February 2018 to February 2019). Initially, two investigators (EAW and EBB) conducted the interviews. Once trained in this interview technique, EAW conducted interviews one-on-one with participants. Interviews lasted between 30 and 45 min and were conducted using an interview guide. The interviews began with questions exploring students’ use and understanding of the supervision recommendations provided in EPA assessments. Follow-up questions, such as “How have these supervision recommendations impacted how you do your work on the clerkships?”, “How consistent have recommendations from different supervisors been for you?”, and “What do you think went into the decision making of the supervisor when they selected a level of supervision?”, were asked as the interview progressed.
Interviews were audio recorded, transcribed, and de-identified before coding. We performed directed content analysis sensitized by previous work categorizing the factors that influence trust-based decision-making [6, 36, 37]. Two authors (EAW and EBB) independently performed both concept-driven and open coding in a dual analytic process [38]. ten Cate et al.’s five categories of trust comprised 29 factors, and for 21 of these, the research team applied the additional descriptors of “self” (related to the student) and “other” for further clarity and meaning, allowing for a total of 50 concept-driven codes available for use in the analysis [6]. In the end, the authors iteratively identified and applied 34 total codes to the transcripts: 22 of the 50 concept-driven codes described above and 12 novel data-driven codes identified during the coding process. The authors discussed and reconciled differences in coding and refined the codebook as needed to make sense of the data; recruitment of participants was stopped once saturation was achieved. Using axial coding, the codes were then grouped into four categories through discussion by all of the authors. The categories organized the codes according to their temporal relationship to completion of an EPA assessment (before, during, after, throughout). Given the interdependence and complexities of many of the factors associated with the codes, placement within the temporal structure was assigned using the context described by the participants. Illustrative, representative quotes from the interviews associated with the codes are numbered based on the order in which the interviews occurred. To enhance trustworthiness, we performed member checking; participants reviewed codes and categories to ensure credibility of the analyses [39].
It is important to note that EAW was a clerkship student during the pilot phase of the EPA program (June 2017-February 2018) and thus had completed EPA assessments prior to the time of this study. This experience gave this author a unique perspective on the program and required that they consider their experience with EPA assessments while conducting interviews and during analysis of the data. MEG and EBB are on the EPA Leadership Team that oversaw the development and implementation of the EPA teaching and assessment program, and thus had to remain aware throughout the study and the writing of this manuscript of personal biases and their intimate knowledge of the intended and implemented program. This research was reviewed and determined to be exempt by the University of Virginia Institutional Review Board (IRB 2018–0309).
Results
Nine medical students participated in the study. Each interviewee was more than halfway through their required clerkships in the inaugural year of the EPA program, and had completed approximately 25 EPA assessments prior to participation in the study. Table 1 delineates the 22 codes used in the analysis that were derived from ten Cate’s five categories of factors that impact ad hoc entrustment decisions, and the 12 novel codes identified during analysis. As noted by ten Cate, the language for the labels was borrowed from Kennedy et al. [6, 40].
Participant interviews provided insight about factors that influence how students interpret and assign value to the information provided in an EPA assessment. Learners described varying degrees and individual ways of using the supervision recommendations provided in EPA assessments. Students who did look at the supervision rating melded their interpretation of this information with the written and oral feedback they received as part of an EPA assessment to construct their overall understanding of the data. One student stated, “…I think of what they said in the feedback versus what they said about the level of autonomy [from the rating scale].” (006) Another student described the impact on their expectations for themselves, stating “I don’t think it’s their recommendation that encourages me to think autonomously necessarily. It’s rather just a natural progression of learning. I don’t think the recommendations from the supervisors specifically are necessarily playing a role in that, but I have noticed myself throughout the EPA’s… I suppose expect more of myself. I’m like, ‘OK, I can perform on a different level.’” (002) Students reported that different factors affected their use of the supervision rating: forgetting to look at the rating after the assessment, feeling that supervisors give more thought to the feedback than to the rating, and believing that the rating is not as useful to their clinical development as the written/oral feedback they receive or as their own self-assessment of their abilities. Students explained, “To be honest, with most of the feedback on as far as the level goes, I open it up, I look at it and go, ‘OK, that one again. Neat.’ Then that’s about the extent of my thought process on it. It doesn’t really impact what I think.” (008) and, “It’s dependent on the task. For the physical exam, when (sic) those EPAs, I was definitely more understanding of the supervisor needing to be in the room or anything. I was always much more open.
When it was a history, writing a note or something like that, I felt like maybe there was not… I should be able to do that and not have to be so… at least at this point in third year. I feel like I should be able to do those things without constantly being supervised.” (003).
Factors Before an EPA Assessment That Impact Students’ Use of Information
Students described several considerations prior to an EPA assessment that affected their interpretation of the value of the information provided. Pre-assessment factors include those related to the supervisor (engagement in the EPA system and learners’ perception of the supervisor’s experience with evaluation) and those related to the student (sense of responsibility, perception of the EPA program, habits of self-evaluation, perception of being a burden to their supervisor, and viewing the EPA program as an obligation versus an opportunity). Participants described the importance of how engaged they believed the supervisor to be in the EPA system prior to asking the supervisor for observation. One student stated, “If people are very glad to do it, if they offer, or volunteer, etc., to do EPAs or if they have that EPA little sticker on their [ID badge], I find that makes me more inclined to approach them about something like this in the future and do it.” (005) Learner perception of supervisor experience with evaluation also influenced the way in which students interpreted the information provided in an EPA assessment. “I think that [experience] does kind of [have] a role in my evaluation of the credibility. I think that like established attendings… it really depends on the person but I think that they sometimes have a better feel than residents for, or even… higher-level residents versus lower-level residents, they, the upper levels, will know more about where we are in our level of training and what's expected of us at this point of training.” (001) Learners described their sense of responsibility as important, wondering, “Am I responsible enough to be doing this on my own without any harm to patients or anything like that?” (005) Other students described how one’s understanding and perception of the EPA program might color the use of the information received before even going into the assessment.
“My understanding is that the purpose of the EPA is eventually you would want to get to the point where you can do it without supervision. The point of that scale is measuring where along that path you are.” (004) and “I’m guessing it’s to reflect on how we performed whatever task we did. My guess is that, collectively, it’s supposed to show us some type of progress.” (003) Habits of self-assessment may also impact students’ use of assessment information: “Maybe if I was doing more self-reflection, seeing my own thoughts, and giving myself something on this [supervision] scale immediately after, then seeing what they did and being like, ‘OK, well, I trust that because I thought maybe I could’ve done this better right after and they are also saying that I could’ve done this better right after.’ Instead, it’s just like I forget, it all blends together.” (003) If students feel that they are a burden to attendings or residents, it will impact whether or not they will seek an EPA assessment, and this can impact their acceptance or use of the information provided: “And then the attending, how willing the attending was to, or resident, to do the EPA and what their attitude was when I asked and like, if they were enthusiastic about like, ‘[Oh], yeah for sure send it to me’ or if there were like, ‘[Yeah]’… I think it’s more dependent on other factors. Either the feedback or their enthusiasm.” (001) Another student added, “You only get feedback from people who have the time to basically fill out something in two days too. That can also vary a lot.” (008) When students viewed EPA assessments as an obligation that needed to be completed instead of an opportunity for personal development, however, they felt that the information provided was not useful. “That’s not the case, but EPAs, because they seem like a check box, it’s like, ‘OK, I’ve checked that off…’.” (009).
Factors During an EPA Assessment That Impact Students’ Use of Information
Students included elements related to the context and the interactions/relationship with a supervisor in their description of factors that impacted the value of information provided during assessment. Specifically, they described supervisor skill and experience with EPA assessments, the student’s familiarity with the specialty, patient complexity, and whether an observation changed typical behavior for either the supervisor or the learner. Supervisor skill and experience with the EPA program weighed heavily in the students’ critique of the assessment data. If an assessor comments that they do not really know what the supervision scale or the EPAs are, students’ trust in and use of the information diminish; if the opposite is true, students are more likely to use the assessment findings. A student explained, “Because some of the attendings, I’m sure, have never done one [an EPA assessment]. It’s not like I would trust them anymore.” (004) Additionally, level of familiarity with clinical specialties impacted student experience with EPAs: “When I was on, let’s say, neurosurgery, I was not very familiar with a lot of what was going on there just in terms of the science itself. I feel like the EPA tended to reflect that… I would try to target areas of weakness either by that [assessor’s] recommendation or more usually self-identified through the assessment overall. Then I would just focus on that.” (005) Another student described the value of being able to interact with an attending during an EPA assessment, “The EPA, if you do it with them, then I feel like a lot of what they draw on in terms of feedback for you stems from that, because they don’t really see you much aside from that.” (009) Students described how decisions are affected by the amount of time they spend with supervisors: “Residents and attendings that I have spent more time with tend to tell me I need less supervision.
I think that’s less me doing better, and more, they’re more familiar with me as a student.” (007) Regarding patient complexity, this student further explained, “In a rotation, it’s when things are less busy because the patients are less complex, I think I do better. It’s because I’m having a less complex interaction.” Another student noted, however, that being observed for an EPA assessment during a patient encounter could alter their behavior: “Maybe being more conscious of for example, now the physical exam is very technical, and I felt myself in the EPA be a little bit more slow and cautious, and I think that carried on into practice, too. It's also repetition, the more we do it the more comfortable we’re getting. It slows me down and forces me to think about… I would say I feel like it allows me to be more cautious and think about my weaknesses more.” (002).
Factors After an EPA Assessment That Impact Students’ Use of Information
Participants reported that the volume and thoughtfulness of the feedback provided in an EPA assessment influenced their interpretation of the value of these assessments. “Sometimes I get feedback and it’s a few words. That overall makes me not really pay attention to that EPA because I feel like they didn't take that much time to do it.” (007) “Whether it’s thoughtfully written, whether they’re just saying, ‘Oh, good job, continue doing what you’re doing,’ or whether it’s negative or positive but have a specific example of what I am doing well or poorly definitely affects my trust in their recommendations.” (002) Learners’ perception of supervisor conscientiousness and reliability also had an effect: “There’s some consistency with the master assessors. I would say that they tend to be fairly consistent. I would say there’s consistency with people who have been invested in EPAs. However, if someone is not, then all that goes out the window.” (008) Additional elements post-observation that affected learners’ use of information were congruence between the supervision recommendation and verbal feedback, congruence between learner self-assessment and the supervision recommendation, and learner self-confidence in performing the task. “It just makes you want to get better at it. At the same time, it’s like you know you can’t get better at it without doing it more often. It’s not like the next time I did it, I felt that I should be at the level without supervision.” (004)
Factors Throughout an EPA Assessment That Impact Students’ Use of Information
Students’ perception of the learner-supervisor relationship was a pervasive theme, as illustrated in the following quotes: “I feel like if I was to see one of the master assessors again, just the fact that now I have a relationship with them, I’d be more likely to ask them for feedback if they were to see me again or go for the same master assessor at a later time, since now I’d have that relationship with them.” (009) Another student described how the observed interaction between a learner and a supervisor supports a commitment to the student’s learning and development, stating, “Otherwise, you’re just put into a shadowing role a lot in fast-paced clinics, ORs in general. You might just be told to watch and make it an observer sport. When people are actually forced to watch you take an H&P, for instance, they think a lot more critically about your performance. I always felt like there was more of a mentorship when that happened. I would say quite positively.” (005) In general, participants felt that meaningful engagement between the learner and the supervisor, be it during a single encounter or across multiple encounters, made feedback more valuable and more likely to be implemented at all points in the EPA assessment process.
Discussion
In this study, we sought to explore how students use the entrustment decision provided in ad hoc EPA assessments. Students’ descriptions of their experience suggested that most learners primarily focused on the verbal and written feedback provided during EPA assessments and that factors related to the supervisor and the learner impacted the value they placed on the information they received. Content analysis of semi-structured interviews of medical students participating in an EPA-based program of assessment informed the creation of a model that illustrates how the factors identified by learners interact in an iterative fashion (Fig. 1) to shape how students interpret and assign value to the information provided in an EPA assessment. As the figure illustrates, factors nest temporally before, during, and after EPA assessments but also interact to influence student perceptions about future EPA assessments. Students underscored the relationship between the learner and the supervisor as fundamental and of critical importance to a learner’s use of information from an EPA assessment, reinforcing the importance of an educational alliance between teachers and learners not only to advance learning but also to lay the foundation for trust-based decision-making [9, 10, 23, 41, 42].
Acceptability of an assessment program requires that stakeholders believe that the process and results of the assessments used in the program are credible [16]. In our study, participants focused on the feedback provided as a component of ad hoc EPA assessments, suggesting that learners prioritize this information above the supervision recommendation/entrustment decision. Feedback provided in formative assessments has a “catalytic” effect when this information is used to enhance learning [18, 43]. Equally important, a “good” assessment also has an educational effect and is motivating to learners [18, 43]. In the case of EPA assessments, the use of criterion-based performance expectations to translate observed behaviors into supervision recommendations sets the stage for students’ interpretation of feedback in relation to what is needed to perform the task with indirect supervision [14]. An entrustment decision indicating the level of supervision a learner needs to perform a task provides data to ground their self-assessment, discern limitations, and inform when they should ask for help [4, 8, 10, 29, 40]. Students in our study, however, did not universally report using the entrustment decision in these ways.
Self-determination theory suggests that intrinsic motivation is driven by a learner’s sense of autonomy, competence, and relatedness [44]. As noted, students’ perceptions about the teacher-learner relationship, i.e., their sense of relatedness, were a primary factor that determined how they viewed the value of the assessment [23, 41, 42]. EPA assessments include verbal and narrative comments that provide information about a learner’s strengths and areas in need of development. These data, in conjunction with the supervision recommendation, provide information about a student’s emerging competence. Learners are motivated to perform tasks when they feel competent to do so [45]. Ad hoc assessments in our program are intended to be formative, but as seen in our results, various factors before, during, after, and throughout assessment influence how and whether students see this information as a tool for learning.
The entrustment decision provided in an EPA assessment is directly related to autonomy. In our program, assessors provide a recommendation about the level of supervision a learner needs the next time they perform the task in a similar context [2, 34]. Using data from direct observation of patient encounters, assessors provide information about a learner’s abilities as they relate to readiness for graduated autonomy [2, 31, 41, 45]. Our results suggest that in order to fully engage students in an EPA-based program of assessment, additional efforts are needed to strengthen their understanding of how to use and interpret this information as data about their readiness for additional responsibility in the care of patients [2, 4, 32]. Likewise, efforts to engage and support faculty in the EPA program should continue to ensure that they provide accurate assessments and that supervision ratings and feedback are aligned, so that the data from EPA assessments are meaningful for students and can be used to support their growth as professionals [10, 34, 45,46,47,48,49].
The small number of participants interviewed in this study may limit the transferability of the findings [39]. Despite the small cohort, we did reach saturation in the responses from participants. The viewpoints of students willing to participate in the study may differ from those of students who did not volunteer to participate. Additionally, study participants were in the inaugural class that completed EPA assessments in the core clerkships. With such a significant change in the curriculum and the culture of assessment, logistic challenges likely impacted the experience of participants and thus their perceptions of the system as a whole [22, 34]. Changing the mindset of stakeholders (learners and assessors) to promote their engagement in a new system of assessment also takes time [32, 34, 50]. Many of these challenges may fade as the program matures. The authors each have a unique perspective influencing the study as well. One of the authors (EAW) was a student who participated in the pilot phase of the EPA program, while MEG and EBB were part of the development team and continue to guide and administer the EPA program. As noted, member checking was done to enhance the trustworthiness of the analysis [39].
As illustrated by students’ reports that they more consistently value the verbal and written feedback over the supervision recommendation, the introduction of an EPA-based framework for assessment requires intentional efforts to help learners view all of the elements of the assessment data as a tool for their ongoing learning and development [4, 10, 32]. To value the information provided, learners must understand the standards being applied to assess their performance and be able to use this information to discern the need for supervision when they engage in clinical tasks in the future [8, 32]. Ad hoc supervision decisions should motivate learners as they continue to practice the task across various contexts before a summative entrustment decision is made [4]. In our program, we incorporate professional development for students to orient them to workplace learning, trust-based decision-making, and the goals of the EPA program. Students are introduced to the performance expectations for each EPA task and to the criteria used by supervisors to make supervision recommendations. The training sessions also introduce students to strategies and tools that they can use to analyze and reflect on the data they receive. Importantly, students learn that EPA assessments do not contribute to course evaluations and grades. They also learn about the Entrustment Committee, a committee charged with aggregating data from ad hoc assessments to make a summative entrustment decision, i.e., to determine a student’s readiness to engage in an advanced clinical elective in the post-clerkship phase of the curriculum [49]. The findings from this study have informed our ongoing efforts to enhance the EPA program and the professional development training for all stakeholders.
Specific elements that have been enhanced or reinforced align with several recommendations recently outlined by Geraghty et al.: students are now introduced to the EPA program early in the curriculum and complete assessments as a part of their Foundations of Clinical Medicine course in the preclerkship phase; shared responsibility for the assessment process is highlighted in training sessions for assessors and for students by providing specific examples of the ways that students and assessors can partner to identify opportunities for assessment as a part of the workflow related to patient care; and, lastly, the R2C2 framework for feedback is promoted as a tool for assessors to structure immediate feedback and to engage learners in a reflective dialogue about their performance, even though the ad hoc assessments are done in settings in which the supervisors do not have longitudinal relationships with the students [10, 25, 32, 34, 51]. Students in our program engage with Faculty Coaches throughout the educational program. Coaches help students interpret the data they receive from assessments of their clinical performance and partner with students to co-create learning plans using this information [34, 52].
Conclusions
The results of this study suggest the need for further work to elucidate how to promote the educational and catalytic effects of EPA assessments and to foster the acceptability of the process and its results for all stakeholders [43]. A fundamental aim of the EPA program is to promote stakeholders’ trust in the process used for, and the data provided by, EPA assessments. When all stakeholders value the information, teachers and learners can act as partners, engaging in bi-directional dialogue to determine when a student is ready to take on additional responsibilities in patient care [2, 42, 53, 54].
Data Availability
Additional excerpts analyzed during the study can be made available from the corresponding author upon reasonable request.
References
Ten Cate O. Entrustability of professional activities and competency-based training. Med Educ. 2005;39:1176–7. https://doi.org/10.1111/j.1365-2929.2005.02341.x.
Ten Cate O, Schwartz A, Chen HC. Assessing trainees and making entrustment decisions: on the nature and use of entrustment-supervision scales. Acad Med. 2020;95:1662–9. https://doi.org/10.1097/ACM.0000000000003427.
Ten Cate O, Chen HC, Hoff RG, Peters H, Bok H, Van der Schaaf M. Curriculum development for the workplace using Entrustable Professional Activities (EPAs): AMEE Guide No. 99. Med Teach. 2015;37:983–1002. https://doi.org/10.3109/0142159X.2015.1060308.
Peters H, Holzhausen Y, Boscardin C, Ten Cate O, Chen HC. Twelve tips for the implementation of EPAs for assessment and entrustment decisions. Med Teach. 2017;39:802–7. https://doi.org/10.1080/0142159X.2017.1331031.
Hauer KE, Ten Cate O, Boscardin C, Irby DM, Iobst W, O’Sullivan PS. Understanding trust as an essential element of trainee supervision and learning in the workplace. Adv Health Sci Educ Theory Pract. 2014;19:435–56. https://doi.org/10.1007/s10459-013-9474-4.
Ten Cate O, Hart D, Ankel F, Busari J, Englander R, Glasgow N, Holmboe E, Iobst W, Lovell E, Snell LS, Touchie C, Van Melle E, Wycliffe-Jones K. On behalf of the International Competency-Based Medical Education Collaborators. Entrustment decision making in clinical training. Acad Med. 2016;91:191–8. https://doi.org/10.1097/ACM.0000000000001044.
Holzhausen Y, Maaz A, Cianciolo AT, Ten Cate O, Peters H. Applying occupational and organizational psychology theory to entrustment decision-making about trainees in health care: a conceptual model. Perspect Med Educ. 2017;6:119–26. https://doi.org/10.1007/s40037-017-0336-2.
Brown DR, Warren JB, Hyderi A, Drusin RE, Moeller J, Rosenfeld M, Orlander PR, Yingling S, Call S, Terhune K, Bull J, Englander R, Wagner DP. On behalf of the AAMC Core Entrustable Professional Activities for Entering Residency Entrustment Concept Group. Finding a path to entrustment in undergraduate medical education: a progress report from the AAMC Core Entrustable Professional Activities for Entering Residency entrustment concept group. Acad Med. 2017;92:774–9. https://doi.org/10.1097/ACM.0000000000001544.
Dolan BM, Arnold J, Green MM. Establishing trust when assessing learners: barriers and opportunities. Acad Med. 2019;94:1851–3. https://doi.org/10.1097/ACM.0000000000002982.
Caro Monroig AM, Chen HC, Carraccio C, Richards BF, Ten Cate O, Balmer DF. EPAC Study Group. Medical students’ perspectives on entrustment decision-making in an EPA assessment framework: a secondary data analysis. Acad Med. 2020. Online ahead of print. https://doi.org/10.1097/ACM.0000000000003858.
Lucey CR, Thibault GE, Ten Cate O. Competency-based, time-variable education in the health professions: crossroads. Acad Med. 2018;93:S1–5. https://doi.org/10.1097/ACM.0000000000002080.
Van Melle E, Frank JR, Holmboe ES, Dagnone D, Stockley D, Sherbino J. On behalf of the International Competency-based Medical Education Collaborators. A core components framework for evaluating implementation of competency-based medical education programs. Acad Med. 2019;94:1002–9. https://doi.org/10.1097/ACM.0000000000002743.
Karp NC, Hauer KE, Sheu L. Trusted to learn: a qualitative study of clerkship students’ perspectives on trust in the clinical learning environment. J Gen Intern Med. 2019;34:662–8. https://doi.org/10.1007/s11606-019-04883-1.
Duijn CCMA, Welink LS, Mandoki M, Ten Cate OTJ, Kremer WDJ, Bok HGJ. Am I ready for it? Students’ perceptions of meaningful feedback on entrustable professional activities. Perspect Med Educ. 2017;6:256–64. https://doi.org/10.1007/s40037-017-0361-1.
Watling C, Driessen E, van der Vleuten CPM, Lingard L. Learning from clinical work: the roles of learning cues and credibility judgements. Med Educ. 2012;46:192–200. https://doi.org/10.1111/j.1365-2923.2011.04126.x.
Norcini J, Anderson MB, Bollela V, Burch V, Costa MJ, Duvivier R, Hays R, Mackay MFP, Roberts T, Swanson D. 2018 Consensus framework for good assessment. Med Teach. 2018;40:1102–9. https://doi.org/10.1080/0142159X.2018.1500016.
Duijn CCMA, Welink LS, Bok HGJ, Ten Cate OTJ. When to trust our learners? Clinical teachers’ perceptions of decision variables in the entrustment process. Perspect Med Educ. 2018;7:192–9. https://doi.org/10.1007/s40037-018-0430-0.
Watling CJ, Ginsburg S. Assessment, feedback and the alchemy of learning. Med Educ. 2019;53:76–85. https://doi.org/10.1111/medu.13645.
Martin L, Sibbald M, Vegas DB, Russell D, Govaerts M. The impact of entrustment assessments on feedback and learning: trainee perspectives. Med Educ. 2020;54:328–36. https://doi.org/10.1111/medu.14047.
Dudek N, Gofton W, Rekman J, McDougall A. Faculty and resident perspectives on using entrustment anchors for workplace-based assessment. J Grad Med Educ. 2019;11:287–94. https://doi.org/10.4300/JGME-D-18-01003.1.
Day LB, Miles A, Ginsburg S, Melvin L. Resident perceptions of assessment and feedback in competency-based medical education: a focus group study of one internal medicine residency program. Acad Med. 2020;95:1712–7. https://doi.org/10.1097/ACM.0000000000003315.
Pelgrim EA, Kramer AW, Mokkink HG, Van Der Vleuten CP. The process of feedback in workplace-based assessment: organisation, delivery, continuity. Med Educ. 2012;46:604–12. https://doi.org/10.1111/j.1365-2923.2012.04266.x.
Schut S, Driessen E, van Tartwijk J, van der Vleuten C, Heeneman S. Stakes in the eye of the beholder: an international study of learners’ perceptions within programmatic assessment. Med Educ. 2018;52:654–63. https://doi.org/10.1111/medu.13532.
Brand PLP, Jaarsma ADC, van der Vleuten CPM. Driving lesson or driving test?: a metaphor to help faculty separate feedback from assessment. Perspect Med Educ. 2021;10:50–6. https://doi.org/10.1007/s40037-020-00617-w.
Schuwirth LWT, Van der Vleuten CPM. Programmatic assessment: from assessment of learning to assessment for learning. Med Teach. 2011;33:478–85. https://doi.org/10.3109/0142159X.2011.565828.
Ricci M, St-Onge C, Xiao J, Young M. Students as stakeholders in assessment: how students perceive the value of an assessment. Perspect Med Educ. 2018;7:352–61. https://doi.org/10.1007/s40037-018-0480-3.
Konopasek L, Norcini J, Krupat E. Focusing on the formative: building an assessment system aimed at student growth and development. Acad Med. 2016;91:1492–7. https://doi.org/10.1097/ACM.0000000000001171.
Harrison CJ, Könings KD, Dannefer EF, Schuwirth LWT, Wass V, van der Vleuten CPM. Factors influencing students’ receptivity to formative feedback emerging from different assessment cultures. Perspect Med Educ. 2016;5:276–84. https://doi.org/10.1007/s40037-016-0297-x.
Sargeant J, Eva KW, Armson H, Chesluk B, Dornan T, Holmboe E, Lockyer JM, Loney E, Mann KV, van der Vleuten CPM. Features of assessment learners use to make informed self-assessments of clinical performance. Med Educ. 2011;45:636–47. https://doi.org/10.1111/j.1365-2923.2010.03888.x.
Lefroy J, Watling C, Teunissen PW, Brand P. Guidelines: the do’s, don’ts and don’t knows of feedback for clinical education. Perspect Med Educ. 2015;4:284–99. https://doi.org/10.1007/s40037-015-0231-7.
Yardley S, Westerman M, Bartlett M, Walton JM, Smith J, Peile E. The do’s, don’t and don’t knows of supporting transition to more independent practice. Perspect Med Educ. 2018;7:8–22. https://doi.org/10.1007/s40037-018-0403-3.
Geraghty JR, Ocampo RG, Liang S, Ayala KE, Hiltz K, McKissack H, Hyderi A, Ryan MS. Medical students’ views on implementing the Core EPAs: recommendations from student leaders at the Core EPAs pilot institutions. Acad Med. 2021;96:193–8. https://doi.org/10.1097/ACM.0000000000003793.
Englander R, Flynn T, Call S, Carraccio C, Cleary L, Fulton TB, Garrity MJ, Lieberman SA, Lindeman B, Lypson ML, Minter RM, Rosenfield J, Thomas J, Wilson MC, Aschenbrener CA. Toward defining the foundation of the MD degree: core entrustable professional activities for entering residency. Acad Med. 2016;91:1352–8. https://doi.org/10.1097/ACM.0000000000001204.
Bray MJ, Bradley EB, Martindale JR, Gusic ME. Implementing systematic faculty development to support an EPA-based program of assessment: strategies, outcomes and lessons learned. Teach Learn Med. 2020:1–31. Online ahead of print. https://doi.org/10.1080/10401334.2020.1857256.
Chen HC, van den Broek WES, ten Cate O. The case for use of entrustable professional activities in undergraduate medical education. Acad Med. 2015;90:431–6. https://doi.org/10.1097/ACM.0000000000000586.
Hsieh HF, Shannon SE. Three approaches to qualitative content analysis. Qual Health Res. 2005;15:1277–88. https://doi.org/10.1177/1049732305276687.
Charmaz K. Grounded theory: Objectivist and constructivist methods. In: Denzin NK, Lincoln YS, editors. Strategies for qualitative inquiry. 2nd ed. Thousand Oaks: Sage; 2003. p. 249–91.
Kennedy TJT, Lingard L. Making sense of grounded theory in medical education. Med Educ. 2006;40:101–8. https://doi.org/10.1111/j.1365-2929.2005.02378.x.
Korstjens I, Moser A. Series: Practical guidance to qualitative research. Part 4: trustworthiness and publishing. Eur J Gen Pract. 2018;24:120–4. https://doi.org/10.1080/13814788.2017.1375092.
Kennedy TJT, Regehr G, Baker GR, Lingard L. Point-of-care assessment of medical trainee competence for independent clinical work. Acad Med. 2008;83:S89-92. https://doi.org/10.1097/ACM.0b013e318183c8b7.
Telio S, Regehr G, Ajjawi R. Feedback and the educational alliance: examining credibility judgements and their consequences. Med Educ. 2016;50:933–42. https://doi.org/10.1111/medu.13063.
Schut S, van Tartwijk J, Driessen E, van der Vleuten C, Heeneman S. Understanding the influence of teacher-learner relationships on learners’ assessment perception. Adv Health Sci Educ Theory Pract. 2020;25:441–56. https://doi.org/10.1007/s10459-019-09935-z.
Norcini J, Anderson MB, Bollela V, Burch V, Costa MJ, Duvivier R, Galbraith R, Hays R, Kent A, Perrott V, Roberts T. Criteria for good assessment: consensus statement and recommendations from the Ottawa 2010 Conference. Med Teach. 2011;33:206–14. https://doi.org/10.3109/0142159X.2011.551559.
Ten Cate OTJ, Kusurkar RA, Williams GC. How self-determination theory can assist our understanding of the teaching and learning processes in medical education. AMEE Guide No. 59. Med Teach. 2011;33:961–73. https://doi.org/10.3109/0142159X.2011.595435.
Schumacher DJ, Englander R, Carraccio C. Developing the master learner: applying learning theory to the learner, the teacher, and the learning environment. Acad Med. 2013;88:1635–45. https://doi.org/10.1097/ACM.0b013e3182a6e8f8.
Ramani S, Könings KD, Ginsburg S, van der Vleuten CPM. Twelve tips to promote a feedback culture with a growth mind-set: swinging the feedback pendulum from recipes to relationships. Med Teach. 2019;41:625–31. https://doi.org/10.1080/0142159X.2018.1432850.
Hoffman BD. Using self-determination theory to improve residency training: learning to make omelets without breaking eggs. Acad Med. 2015;90:408–10. https://doi.org/10.1097/ACM.0000000000000523.
Favreau MA, Tewksbury L, Lupi C, Cutrer WB, Jokela JA, Yarris LM. AAMC Core Entrustable Professional Activities for Entering Residency Faculty Development Concept Group. Constructing a shared mental model for faculty development for the Core Entrustable Professional Activities for Entering Residency. Acad Med. 2017;92:759–64. https://doi.org/10.1097/ACM.0000000000001511.
Keeley MG, Gusic ME, Morgan HK, Aagaard EM, Santen SA. Moving toward summative competency assessment to individualize the postclerkship phase. Acad Med. 2019;94:1858–64. https://doi.org/10.1097/ACM.0000000000002830.
Lupi CS, Ownby AR, Jokela JA, Cutrer WB, Thompson-Busch AK, Catallozzi M, Noble JM, Amiel JM. Faculty development revisited: a systems-based view of stakeholder development to meet the demands of entrustable professional activity implementation. Acad Med. 2018;93:1472–9. https://doi.org/10.1097/ACM.0000000000002297.
Lockyer J, Armson H, Könings KD, Lee-Krueger RC, des Ordons AR, Ramani S, Trier J, Zetkulic MG, Sargeant J. In-the-moment feedback and coaching: improving R2C2 for a new context. J Grad Med Educ. 2020;12:27–35. https://doi.org/10.4300/JGME-D-19-00508.1.
Parsons AS, Kon RH, Plews-Ogan M, Gusic ME. You can have both: coaching to promote clinical competency and professional identity formation. Perspect Med Educ. 2021;10:57–63. https://doi.org/10.1007/s40037-020-00612-1.
Abruzzo D, Sklar DP, McMahon GT. Improving trust between learners and teachers in medicine. Acad Med. 2019;94:147–50. https://doi.org/10.1097/ACM.0000000000002514.
Hendren EM, Kumagai AK. A matter of trust. Acad Med. 2019;94:1270–2. https://doi.org/10.1097/ACM.0000000000002846.
Author information
Contributions
All authors contributed to the study conception and design. Material preparation, data collection, and analysis were performed by Elizabeth Bradley, PhD, and Eric Waselewski, MD. The first draft of the manuscript was written by Elizabeth Bradley, PhD, and Maryellen Gusic, MD, and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript.
Cite this article
Bradley, E.B., Waselewski, E.A. & Gusic, M.E. How Do Clerkship Students Use EPA Data? Illuminating Students’ Perspectives as Partners in Programs of Assessment. Med.Sci.Educ. 31, 1419–1428 (2021). https://doi.org/10.1007/s40670-021-01327-6