Simulation Pearls
  • There are increasing calls for healthcare professionals to fulfill their social contract with society and ensure competence of all professionals in order to maintain the privilege of self-regulation. Competency-based education (CBE) offers promise, as an outcome-based model of education, to help address the gap between actual and desired performance.

  • Simulation-based education (SBE) curricula should be based on a needs analysis. Before design begins, clear goals should be defined against which the success of the training program will be measured.

  • Specific learning objectives, instructional strategy, simulation technology, training environment, and debriefing models should be carefully selected based on the level of learner.

  • Challenges to CBE include defining learning objectives that are not excessively comprehensive, trainees focusing on milestones rather than achieving excellence, administrative logistics, instructor expertise and availability, and cost.

  • Optimal education will require a change in our current approach to assessment. Assessment needs to be programmatic and conceptualized as part of instructional design with a shift away from assessment of learning to assessment for learning.

  • SBE is increasingly used for high-stakes, summative purposes such as local program-based examinations, achieving certification, and demonstrating ongoing competence to maintain certification.

Introduction

In 1999 the Institute of Medicine’s landmark report “To Err is Human: Building a Safer Health System” highlighted that as many as 98,000 deaths each year are due to medical error [1]. In response, accrediting bodies, healthcare organizations, and medical educators across all disciplines have embraced SBE as one solution to improving what many believe was a root cause, namely poor communication and team functioning [2]. A burgeoning literature in SBE has demonstrated that simulation can improve knowledge, skills, and behaviors as well as result in some improvement in patient outcomes [3–7]. Despite the success and large uptake of SBE, many programs are ad hoc with variable and inconsistent instruction, curricula, and evaluation of competency. In response, educators have turned their focus to developing comprehensive curricula for continuing professional development (CPD) and the use of mastery learning/CBE. This chapter will describe a model for curriculum development and the promotion of professional development through CBE. We conclude by reviewing barriers and challenges to CPD and CBE and explore future directions.

Curriculum Development in Competency-Based Education

Some key challenges to developing simulation curricula for CBE across the healthcare education continuum include the heterogeneity of learners, the variable experiences they bring, the feasibility of teaching, and the validity and reliability of assessing competencies through simulation within the professional environment. This section outlines a curriculum development process for CBE for all disciplines that follows the analysis, design, development, implementation, and evaluation (ADDIE) model of instructional design and incorporates concepts from Kern’s six-step approach to curriculum development for medical education and the Simulation Module for Assessment of Resident-Targeted Event Responses (SMARTER) approach of developing measurement tools for simulation-based training [8–10]. The process can be applied to curriculum development for any level of learner and unfolds in the five ADDIE phases (Fig. 14.1). We will describe these five phases and discuss the key considerations in each phase specific to CBE.

Fig. 14.1

Curriculum development process/scenario design based on the analysis, design, development, implementation, and evaluation (ADDIE) Model of Instructional Design and the Simulation Module for Assessment of Resident-Targeted Event Responses (SMARTER) approach to creating measurement tools for simulation-based education [8, 10]. The first two steps of the process, (1) analysis and (2) design, occur sequentially. (3) Development, (4) implementation, and (5) evaluation may be cyclical. Revisions may continue after piloting as evaluation informs further development

Analysis

The process begins with a needs assessment which includes identifying the target audience and level(s) of expertise, defining boundaries to conducting training, and crystallizing the critical requirements to include in the educational initiative. The subject matter expert must understand the gap in knowledge, skills, and attitudes (KSA) to be addressed with simulation-based training as well as the anticipated outcomes. For students or trainees, focus may be placed on learning outcomes including cognitive outcomes, skill-based outcomes, and changes in attitudes. Additionally, the cost of training must be justified against specific outcomes, such as patient safety outcomes and/or financial measures [11].

Needs assessments can be accomplished through literature review, review of institutional data surrounding patient safety events or quality improvement activities, direct observation in the actual clinical or simulated environment, written surveys, in-person or telephone interviews, and/or focus group discussions. The latter three strategies can be completed with learners, their colleagues, their educators or managers, and patients. The rationale for including many perspectives in the needs assessment is that competence in the professional healthcare environment encompasses not only an individual’s knowledge or ability to perform a skill but also the ability to apply this knowledge or skill among interprofessional teams. Perspectives from colleagues and patients will further inform the identification of training needs. For example:

A needs analysis for students suggests that in order to adhere to curriculum standards, learners need to describe the indications for central venous line (CVL) placement. Within the hospital, a needs analysis suggests that physicians in training must demonstrate competence in placing CVLs in order to comply with accrediting body regulations. Furthermore, institutional data may also suggest that there is a high blood stream infection rate in a particular intensive care unit where staff does not feel empowered to speak up, based upon results of a safety attitude survey.

Based on the results of the needs analysis, the target audience, and level(s) of expertise, the critical requirements to include in the educational initiative should be identified. The target audience must be defined in consideration of whether the group is uniprofessional or interprofessional, the varying expertise levels inherent in the group, and the approximate size of the target audience. Because we work in teams in health care, we are interdependent; accordingly, increased consideration and effort should be made to determine the feasibility and applicability of interprofessional education.

In the next step, the expertise level or range of expertise within the target audience must be determined (see Table 14.1 for a description of each learner type). The level of expertise is not presumed by the learner’s level of training but rather should be assessed in each learner to determine their individual starting point. The characteristics of each learner type will inform the entire instructional design and development process. For example, a novice learner may have few past experiences upon which to base judgment. As a result, they are more reliant on established rules, standards, and protocols. In this way, the novice must gain knowledge prior to applying the material in a simulation. Moreover, the simulation should remain focused on a specific task or process and may include some direct coaching (scaffold building) during the simulation. Alternatively, a competent learner is less dependent on rules, algorithms, and analytic decision-making but rather relies on pattern recognition, previous experiences, and gut-feeling to make decisions. A competent learner therefore requires more autonomy in the learning process. This group of learners will benefit from more complex simulations that require decision-making without coaching during the simulation so that the learner can observe the results of their decisions. In professional environments, there may be a mix of expertise present for any learning situation. As the curriculum development process progresses, the design team must develop objectives and a range of expected learner actions for each level of expertise [12–15].

Table 14.1 Levels of expertise [12–15]

The size of the target audience has implications for both feasibility and scheduling. For small uniprofessional groups, a few training sessions to capture all learners may suffice. When the target audience spans an entire department or institution, the design team must determine the appropriate group size for each training class, the appropriate complement of caregivers that should be present for interprofessional training, the frequency at which the training should be offered to capture all individuals in a reasonable amount of time, and the number of faculty it will take to complete the training. Drawing upon social learning theory, not all individuals may need to participate in the simulation in order to learn. There may be a benefit to observing the simulation event. Special consideration should be made to engage observers more directly in the learning process by assigning specific areas of focus to direct their observations [16].
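The scheduling arithmetic described above can be sketched explicitly. The short Python snippet below estimates, for a hypothetical target audience, how many sessions and weeks would be needed to train everyone once; the numbers (audience size, learners per session, sessions per week) are illustrative assumptions rather than figures from this chapter.

```python
import math

def plan_sessions(audience_size, learners_per_session, sessions_per_week):
    """Estimate how many sessions and weeks are needed to train everyone once."""
    sessions_needed = math.ceil(audience_size / learners_per_session)
    weeks_needed = math.ceil(sessions_needed / sessions_per_week)
    return sessions_needed, weeks_needed

# Hypothetical ICU example: 120 staff, 8 learners per interprofessional session,
# 2 in situ sessions offered per week.
sessions, weeks = plan_sessions(audience_size=120, learners_per_session=8, sessions_per_week=2)
print(f"{sessions} sessions over roughly {weeks} weeks")  # -> 15 sessions over roughly 8 weeks
```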

Boundaries to training must also be considered before designing a program. For students, educators may be constrained by the timing of the academic year, faculty who can implement simulations, and finding and training raters to assess competency. In professional environments, the design team must consider institutional policies for education of learners (is the education part of ‘mandatory’ education or do learners need to be paid for their time?), scheduling (are there particular days or times of day to avoid?), timing (does the training need to be completed by a certain date for regulatory purposes?), and location (based on the goals of the program, does the training need to occur in situ or in a simulation laboratory?). In response to these boundaries or constraints, the design team must brainstorm solutions and determine the feasibility, appropriate length of the training, and an achievable timeline and plan for training and assessment.

Finally, the critical requirements or competencies should be defined. It should be determined if there are core competencies already described by a recognized accrediting body (e.g., the National League for Nursing, the Accreditation Council for Graduate Medical Education (ACGME) in the USA, or the Royal College of Physicians and Surgeons of Canada (CanMEDS)) or by the home institution. The aforementioned accrediting bodies have outlined comprehensive competency frameworks and defined key competencies in terms of KSA, by level of learner, required to be a competent clinician. This information should be reviewed to determine which competencies can be included and observed in simulation-based training. If formal requirements are not predefined, the design team must list the competencies that will be included in the training. Once a comprehensive needs analysis is complete, the design team can begin designing the specific educational initiative.

Design

The first step in the design phase is to write a goal statement against which the success of the training initiative will be measured. Goal statements should be specific, measurable, achievable, result-focused, and time-bound [17]. Building on the example provided earlier, a goal statement for novice learners might be to provide simulation-based training for fourth-year medical students during the first 6 months of the academic year on placing CVLs using the Seldinger technique in order to increase their self-efficacy by x%. For an interprofessional group of learners that spans the expertise gradient (novice through expert), a goal might be to decrease blood stream infection rates in the intensive care unit (ICU) by 20% in the following year by providing interprofessional simulation-based training on central line placement, teamwork, and communication.

Next, the specific learning objectives should be defined. Learning objectives describe the specific changes that the training course is meant to produce in the KSA of the learner: What can you reasonably expect the learner to know and be able to do at the end of the program and what change in attitudes are you aiming to achieve? For each objective, a performance statement, a set of conditions, and a set of standards should be incorporated. Knowledge and attitude objectives are less likely to be directly observable than skill objectives. Learning objectives should be written using strong action verbs (see Table 14.2 for a reference on developing learning objectives) [18].

Table 14.2 Learning objectives [18]
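To make the three components of an objective concrete, the sketch below represents a single objective as structured data. It is a hypothetical illustration built around the CVL example used in this chapter, not a prescribed format.

```python
from dataclasses import dataclass

@dataclass
class LearningObjective:
    performance: str  # observable action, written with a strong action verb
    conditions: str   # circumstances under which the performance will occur
    standards: str    # criteria defining acceptable performance

# Hypothetical objective for the novice CVL example used in this chapter
cvl_objective = LearningObjective(
    performance="Demonstrate placement of a central venous line using the Seldinger technique",
    conditions="on a task trainer, with standard equipment and without coaching",
    standards="all procedural steps completed in the correct order on a validated checklist",
)
```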

Following the setting of goals and objectives, the next step is to select an instructional strategy based on several learning theories, including self-determination theory, experiential learning theory, and cognitive load theory. Self-determination theory, which describes a learner’s willingness to learn, posits that learners must feel related to the group, feel a sense of competence, and feel a sense of autonomy. A safe environment for learning must be established at the start of all simulation-based training by establishing rules of engagement and maintaining confidentiality [19, 20]. Experiential learning theory suggests that adult learners learn through experiences and must engage in a continuous cycle which includes a concrete experience (a simulation), time to observe and reflect, the formation of abstract concepts (facilitated debriefing), and testing or experimenting in new situations (a second simulation or real-life experience) [21]. While simulation and debriefing map quite nicely onto this cycle, the design team should also consider prework and didactic information, specifically for the novice learner who has no previous experience to draw from but relies on rules, algorithms, policies, etc. Cognitive load theory holds that, to achieve effective learning, the cognitive load placed on learners should be kept to a minimum during the learning process because short-term memory can hold only a limited number of elements. It follows that prework, as well as the complexity of the simulations, should match the level of the learner so as not to impede learning by incorporating information and protocols that overcomplicate rather than simplify [19]. Moreover, providing learners with the tools to gain knowledge prior to coming to the experiential simulation lab allows them to process the information independently and then apply this knowledge in the simulation setting, thereby avoiding a common pitfall of lecturing by the simulated bedside.

A second piece of designing the instructional strategy involves selecting the appropriate type of simulation or simulation technology and environment to achieve the objectives and match the level of the learner. Simulation technology can include screen-based simulation, task trainers, human patient simulators, live actors, or hybrid simulation which combines live actors and task trainers. In the example above, the appropriate technology for novice students learning the procedural steps of placing a CVL would be a task trainer designed for this purpose. This training could be accomplished in a simulation laboratory as opposed to the actual clinical environment because the goals of training are narrowly focused on the procedural skill. Novices benefit from time and space to learn, practice, and apply the step-by-step procedure, free of the additional complexities and distractions that might be present in the actual clinical environment. On the other hand, if an interprofessional team is learning how to work together to maintain a sterile environment while placing a central line, the design team may utilize a task trainer and conduct the training in situ. In this way, the learners can practice maintaining sterility while placing a CVL surrounded by the physical barriers that are present in their native clinical environment.

Facilitation strategies for implementing the simulations and debriefing must be considered, again, based upon the level of the learner. For example, novice learners require more instruction and less facilitation. A strategy to consider for novices is scaffolding. This strategy allows the facilitator to provide support where cognitive structures are not sufficiently developed [22]. One way to incorporate scaffolds is to provide expert modeling and then coaching during skills training. Another is to build clinical pauses into simulations with human patient simulators at critical decision-making points to allow the facilitator to prompt reflection-in-action [23]. During this pause, the facilitator can expose the learners’ mental model, frame of mind, or thought process. The facilitator can then provide a scaffold by modeling their own thought process to create new mental models on the part of the learner. The facilitator can then coach or guide the learner to continue in the simulation. These scaffolds can be reorganized or eliminated as learners’ understanding increases. Competent, proficient, and expert groups require less instruction and more facilitation. The design of the course may include simulations without any pause followed by debriefing, allowing learners to reflect-on-action and form new concepts [24]. Furthermore, the course may include opportunities to practice or experiment with new knowledge. This can be accomplished either by allowing learners multiple opportunities to practice with procedural skills or by allowing learners to run through a clinical scenario a second time to apply new theories discovered during a debriefing. There are several models of debriefing described elsewhere in this book (see Chap. 3). An appropriate approach should be determined based on the level of the learner(s).

Development

Once the course is designed, the faculty, the simulation exercises (see Chap. 2), debriefing guides (see Chap. 3), and assessment tools (see Chap. 7) must be developed. The content experts, who may also serve as faculty for the training, should be taught the art and science of simulation design, implementation, and debriefing. It is important for faculty to have a general understanding of adult learning principles in order to create psychological safety for learners. Additionally, faculty should understand how to design simulation exercises, whether for procedural skills on task trainers or for clinical scenarios, and implement them to achieve learning objectives. Finally, because deep learning may not happen with experience alone, faculty need to be trained on facilitating debriefing exercises and matching their instruction during debriefing to the level of learner. For example, debriefing novice learners may include more directed teaching methods, whereas debriefing competent, proficient, and even expert learners may require more guided reflection and discovery of the learners’ mental models or the rationale for their specific behavior. Once a mental model is discovered, the faculty member can facilitate discussion among the group to explore multiple perspectives on that mental model and facilitate learning. Faculty development is described elsewhere in this book (see Chap. 15).

Once the faculty members are adequately trained, they can better engage in the development of simulation exercises to achieve learning objectives. In the design phase of the curriculum development process, learning objectives for the course are defined, while in the development phase, specific objectives and clinical context for each simulation exercise are defined. Selecting the appropriate context is important as it establishes meaningful linkages with experiences and promotes connections among knowledge, skills, and experiences [19, 22]. Context can and should be defined even in procedural skills training so that learners can understand when and how the skill is utilized. One group describes a process of “identifying competencies within the context of a particular profession such that the assessment of competence is tied to learner’s performance of essential clinical activities that define the profession.” This cluster of competencies has been referred to as entrustable professional activities (EPAs). An EPA requires a learner not only to possess knowledge, skill, and attitude but also to apply these through specific activities in the clinical environment to achieve optimal results [25, 26]. In this way, the design team should consider identifying EPAs upon which to base scenarios. This allows learners to acquire not only knowledge but also a sense of when and how to use that knowledge in the actual clinical setting. As an example, the scenario for novice student learners in the CVL example might be largely devoid of clinical context and simply focus on the steps of line placement, while the context for the interprofessional team might be placing the line while ensuring maximal sterility in a septic patient in the ICU, as well as performing the time-out, sterile field preparation, and necessary documentation.

The design team must next define the expected actions using the event-based approach to training (EBAT). This list of expected actions for any competency may look different for each level of expertise. In order to create an opportunity for these actions, the design team should also embed triggers within the scenario script (see Chap. 2). Triggers are prompts for the facilitator to provide necessary events to meet the learning objectives. Please see Table 14.3 for an example of how to embed triggers into a scenario script.

Table 14.3 Embedding triggers into scenario scripts

This list of expected actions and triggers allows the educator to establish a controlled and standardized learning experience. Moreover, this list can be easily combined with observational measurement tools to aid in debriefing and evaluation. For successful EBAT training, the design team should match learning objectives to triggers, define acceptable observable behaviors or expected actions, and script the scenario to ensure triggers are executed according to plan.
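As a rough illustration of how triggers, objectives, and expected actions can be tied together (and reused as an observational checklist), the Python sketch below encodes two hypothetical events from a CVL scenario; the triggers, actions, and scoring approach are illustrative assumptions rather than content from Table 14.3.

```python
# A hypothetical EBAT-style scenario script: each trigger is tied to a learning
# objective and a set of expected observable actions, which can double as an
# observational checklist during debriefing and evaluation.
scenario_script = [
    {
        "trigger": "Monitor shows oxygen saturation falling to 85% during line placement",
        "objective": "Recognize and manage patient deterioration",
        "expected_actions": [
            "Pauses the procedure and assesses the patient",
            "Applies supplemental oxygen",
            "Communicates the change to the team using closed-loop communication",
        ],
    },
    {
        "trigger": "Confederate nurse notes the guidewire has touched a non-sterile surface",
        "objective": "Maintain sterility during central line placement",
        "expected_actions": [
            "Acknowledges the breach",
            "Replaces the contaminated equipment",
        ],
    },
]

def score_observations(script, observed_actions):
    """Return the fraction of expected actions observed for each trigger."""
    return {
        event["trigger"]: sum(a in observed_actions for a in event["expected_actions"])
        / len(event["expected_actions"])
        for event in script
    }
```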

Finally, the design team should develop debriefing guides that outline the phases of debriefing, sample narrative text, and sample questions to include during each phase. Debriefing guides or scripts can assist the novice debriefer in following a structure to guide the learning process and ensure that key learning points are addressed in a standardized way (see Chap. 3) [27]. Furthermore, the guide can be structured to serve dual purposes: an instructor guide and an assessment tool to evaluate faculty on debriefing competencies such as setting the stage for an engaging learning experience, facilitating the debriefing in an organized way, and providing feedback to participants on their performance [28].
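One way to realize this dual-purpose structure is to store the guide as data, so the same phases drive both the instructor script and a faculty assessment checklist. The sketch below is purely hypothetical; the phase names and questions are illustrative and not drawn from a specific published debriefing model.

```python
# A hypothetical debriefing guide stored as data, so the same phases can drive
# both the instructor script and a checklist for assessing the debriefer.
debriefing_guide = {
    "reactions": {
        "purpose": "Allow participants to voice initial reactions",
        "sample_questions": ["How are you feeling after that case?"],
    },
    "analysis": {
        "purpose": "Explore the mental models behind observed actions",
        "sample_questions": [
            "I noticed the procedure paused when the saturation dropped; "
            "what was going through your mind at that point?"
        ],
    },
    "summary": {
        "purpose": "Consolidate take-home learning points",
        "sample_questions": ["What will you do differently next time?"],
    },
}

# Reuse the same structure as a simple faculty assessment checklist
faculty_checklist = [f"Addressed the {phase} phase of the debriefing" for phase in debriefing_guide]
```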

Implementation

The course should be piloted and refined as needed. The goals of the pilot are to provide an opportunity for faculty to practice implementation of the course and to test the simulations. Faculty should practice creating a safe environment, trial any task trainers, practice directing any clinical scenarios using the scenario template to execute triggers, and practice debriefing using the debriefing guide. The pilot may include other faculty or a subset of the target audience willing to participate and offer feedback to further shape the course. During the pilot, the design team should determine if the simulation activity allows faculty to properly observe and assess the predefined competencies and if the debriefing guide adequately promotes discussion of these competencies. Following the pilot, the prework, simulation exercises, and facilitation guides should be revised and potentially piloted again.

Evaluation

The final phase in the curriculum development process is evaluation. Evaluation should include the assessment of the performance of the learners, as will be described in the next section of the chapter, and the evaluation of the effectiveness of the educational program. The evaluation plan should be developed alongside the curriculum development process. Ideally data should be collected, analyzed, and reviewed prior to the implementation of the program, and throughout the program to guide continuous improvement for learners, faculty, and the design team [9].

There are several evaluation types, including formative and summative assessments for both the individual and the program. Formative assessments matched with predefined competencies should be performed at each course offering, with the goal of identifying areas for improvement for both the learner and the program. Alternatively, summative assessments of the learner focus on judging individual competence at a particular skill, or achievement of a milestone. Summative assessment of a program may determine if it has had an impact and if resources will continue to be allocated for future implementation [9].

Kirkpatrick describes four levels of evaluation of training programs: Level 1—Reaction, Level 2—Learning, Level 3—Behavior, and Level 4—Results (Table 14.4) [29]. Level 1 measures how learners reacted to the training and helps identify any topics that might be missing from the curriculum. This can be accomplished through a post-event questionnaire or focus group discussion. Level 2 measures what the learner has actually learned as a result of the training. In order to measure learning, KSAs should be measured prior to and after the training. This can be accomplished by observing expected actions during a simulation or on a written test. Pre-/post-evaluations may also be valuable. Level 3 describes how behavior has changed as a result of training and if the learners can apply what they have learned. Measuring behavior requires observation over time, either in the actual clinical environment or in the simulation laboratory. Observation tools can be generated during the scenario design process and should include the expected critical actions for each learning objective. Finally, Level 4 measures the impact of the training, using the problem and goal statements as described above (see Chap. 7).

Table 14.4 Kirkpatrick’s adapted hierarchy of evaluating educational outcomes [29]. (Reproduced with permission)
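For Level 2, the pre/post comparison described above can be summarized very simply. The sketch below uses hypothetical checklist scores for three learners to report mean pre- and post-training scores and the average individual gain.

```python
# Hypothetical pre- and post-training checklist scores (out of 20 items) for three learners
pre_scores = {"learner_1": 9, "learner_2": 11, "learner_3": 8}
post_scores = {"learner_1": 17, "learner_2": 18, "learner_3": 15}

def mean(values):
    values = list(values)
    return sum(values) / len(values)

gains = [post_scores[l] - pre_scores[l] for l in pre_scores]
print(f"Mean pre-test score:  {mean(pre_scores.values()):.1f}/20")   # 9.3/20
print(f"Mean post-test score: {mean(post_scores.values()):.1f}/20")  # 16.7/20
print(f"Mean individual gain: {mean(gains):.1f} items")              # 7.3 items
```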

Program evaluation is critical to the educational process but challenging to measure. Due to time and resource constraints and ongoing learning in the actual clinical environment, it is challenging and often not feasible to determine how an educational intervention has impacted clinical outcomes, patient safety outcomes, or financial outcomes. Competency-based medical education (CBME) educators can more realistically focus on the impact their program has had on learning and transfer of that learning to application in a simulated environment and then the actual clinical environment.

Simulation for Competency-Based Education

CBE has gained considerable momentum over the past few years and may prove to be a catalyst that transforms health professional education worldwide. CBE can be conceptualized as “the education for the medical professional that is targeted at a fixed level of ability in one or more medical competencies” [30]. This description relies on a trajectory of development from the preclinical phase of professional school to the healthcare provider in practice. Ultimately, the goal of CBE is to produce graduates who provide high quality patient care from the moment they enter clinical medicine in school to the time of retirement. Traditional training models have fallen considerably short of this goal with substantial rates of preventable error that occur across different healthcare systems [31–33]. While that error cannot be attributed entirely to individual practitioners (a large portion may relate to the teams and system they work within), there is also substantial variability in patient outcomes depending on where clinical training occurred [34], suggesting an opportunity to improve patient care through a competency-based approach to education. CBE focuses on accountability and curricular outcomes organized around competencies, promoting greater learner-centeredness and de-emphasizing time-based curricular design [35]. Achievement of competence is demonstrated through a progression of milestones or EPAs [25, 26]. As an example, the field of medicine holds a social contract with society where physicians receive status and respect, are granted the privilege to self-regulate their profession and receive substantial remuneration in exchange for the promise to provide competent, altruistic, and moral care that addresses the needs of individuals and society [36]. Multiple high-profile cases have outlined how medicine can improve its performance in this implicit agreement [37, 38].

Inherent to CBE is the use of a competency framework such as CanMEDS [39], the ACGME competencies [40], or the Scottish Doctor [41]. While the frameworks differ and are chosen to reflect the needs of the local environment, they all extend beyond medical knowledge/expertise and include domains such as communication and collaboration which align well with SBE—particularly as it is applied to crisis resource management or team training (see Chap. 4). Additionally, SBE holds the promise to support skill development and demonstrate baseline competence before trainees perform complex procedures on patients, reducing complications and healthcare costs [42, 43]. This foundational simulation training may function to accelerate the development of expertise in the clinical environment, allowing for system optimization (e.g., expensive operating room (OR) time) [44]. Collateral effects of simulation-based instruction may influence the learning environment and improve skill acquisition of learners who do not actually participate in the simulation [45].

Current models of health professional education retain the silos of undergraduate education, postgraduate education (in the case of medicine), and CPD which may focus learners on the current tasks of the training (particularly during formal training programs) and impede the development of reflection and lifelong learning skills that are critical to improve future practice in an ongoing manner. Medical science is rapidly evolving and there needs to be greater investment to support practicing healthcare providers to incorporate new knowledge into their practice in real time [46]. Evidence from the CPD literature suggests that physician performance and health outcomes improve when the CPD activities are more interactive, use multiple methods, involve multiple exposures, are longer in duration, and are focused on activities that the physician believes to be important [47]. Well-designed SBE has the potential to meet many of these criteria and can form an important piece of CPD. Novel methods of instruction, such as debriefing without a formal debriefer present in the room, may help build capacity to integrate more simulation into CPD course offerings [48].

Two international CBE collaborative summits have been held over the past 5 years (2009 and 2013), with both scholarly and practical outputs [49]. Implementation of CBE has occurred in multiple specialties in multiple jurisdictions, with several others planning to move to a CBE model in the coming years [49]. SBE can align very well with CBE as it allows for feedback from experts, the repetitive practice across a range of difficulty that is required for skill development, and curricular integration [3].

Shifting the Assessment Paradigm for Competency-Based Education

CBE will require substantial change in our current approach to assessment. Assessment should be conceptualized as part of instructional design with a shift away from assessment of learning to assessment for learning [50]. This will require an emphasis on a robust, programmatic approach to assessment that ideally focuses on workplace-based formative assessment, rather than isolated, high-stakes, point-in-time summative examinations. This is not to imply that there is not a role for high-stakes examination, as it can be useful to predict future patient outcomes [24], but rather that the opportunity for lower-stakes, more frequent assessment may be invaluable for learning on an ongoing basis [51]. Moving up to Miller’s top level of “Does” (Fig. 14.2) can only be achieved in the clinical environment, but simulation reaches the level of “Shows” and can be helpful as a piece of the assessment program to inform judgments on the overall competence of practitioners [52].

Fig. 14.2

Miller’s framework for assessment. (Reproduced with permission of [52])

Simulation educators have historically focused on promoting high-fidelity training in order to improve the quality of education and assessment. Yet, the term fidelity has been problematic to define and qualify in the simulation community—we may therefore benefit by considering functional task alignment (the alignment between the simulator’s functional properties and the functional requirements of the task) to reflect how well the simulation-based assessment (SBA) truly allows the learner to “Show” how they might perform [53]. Ultimately, judgment of competence needs to be conducted by a collective, using the wisdom of the crowd (e.g., a competence committee) to incorporate multiple assessments from multiple assessors using multiple tools across multiple situations to help determine competence and make progression decisions about the trainee. Subjective assessments and narrative descriptions may also form an important part of that assessment program [54]. The negative connotations of subjective assessment as biased and unfair do not necessarily hold true (though bias can occur, as it also might with an objective measure). Much like the clinical environment, there may be opportunities to reduce information to a numerical score when appropriate (e.g., a Glasgow Coma Scale score of 3 is identified universally as having the same clinical meaning to every healthcare provider), while there may be other instances where a more complete description would be helpful to make judgments (e.g., one would not hand over a pediatric ICU patient simply with a validated risk of mortality score (such as the Pediatric Index of Mortality 2, PIM2), but one would want more detailed, narrative descriptions of the patient on which to make important judgments and decisions).

Assessment programs should continue to be evaluated by their reliability, validity, acceptability, and cost (both financial and resource related) but should also be judged by their educational impact and catalytic effect where results and feedback are used in a manner that creates, enhances, and supports education [55]. Assessment will certainly need to be more continuous and frequent. It needs to be criterion-based and support learners to achieve developmental milestones [56]. Viewed with this lens, SBE may be important to CBE allowing the achievement of milestones that are associated with rare presentations in the clinical environment, as well as those that would pose significant risk to patients if they were not first assessed outside of clinical care. The educational as well as catalytic effect of SBE could be very positive, but further work needs to be conducted on how best to maintain a safe learning environment where trainees and faculty can feel comfortable making mistakes (and learning from them).

Robust assessment instruments/tools with evidence of validity and reliability will continue to be required, and faculty development around their use may be even more critical (see Chap. 7). Tools are only as good as the individual using them—it may be time for the healthcare professions to mandate teaching faculty to learn a core set of competencies in assessment, with accredited training programs providing ongoing professional development in assessment [57]. As any simulation-based researcher will report, it takes a substantial amount of time to calibrate assessors even with a tightly regulated script for the scenario [48]. Assessment in the workplace complicates the ability to standardize scoring with tremendous variability in case presentation and will require clinical supervisors to understand some of these complexities and basic psychometrics of assessment. Reliability will only be achieved with rater training and adequate sampling of performance (i.e., content specificity should not allow overreliance on a single case). Additionally, there are likely to be content domains in which faculty need to develop their own knowledge and skills before they are able to accurately assess their junior colleagues (e.g., patient safety is a very important part of medical education in this decade, yet many of the practitioners trained previously would have limited formal knowledge in how to teach or assess it). See Table 14.5.

Table 14.5 Assessment in competency-based education
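Rater calibration can be monitored with simple agreement statistics. The sketch below computes Cohen's kappa (chance-corrected agreement) for two hypothetical faculty raters scoring the same binary checklist items from a pilot scenario; the data are invented for illustration and kappa is only one of several possible reliability indices.

```python
def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over paired binary ratings."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    p_a_yes, p_b_yes = sum(rater_a) / n, sum(rater_b) / n
    expected = p_a_yes * p_b_yes + (1 - p_a_yes) * (1 - p_b_yes)
    return (observed - expected) / (1 - expected)

# Hypothetical item-level scores (1 = performed, 0 = not performed) from a pilot scenario
rater_a = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]
rater_b = [1, 1, 0, 1, 0, 0, 1, 1, 1, 1]
print(f"Cohen's kappa: {cohens_kappa(rater_a, rater_b):.2f}")  # ~0.52
```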

Challenges of Continuing Professional Development and Competency-Based Simulation Education

Challenges in Continuing Professional Development

Many concerns still exist about the feasibility and efficacy of SBE to solve the problem of improving quality of care. The many challenges of SBE used for CPD have been highlighted [58]. Although there is mounting evidence that lectures and bolus CPD courses are not effective for long-term knowledge and skills retention, they often remain the preferred methods of educational delivery. Simulation activities, despite attempts to create a safe learning environment, are by their nature anxiety-producing. Exposing deficiencies, especially for more senior healthcare practitioners, is a common concern and dissuades engagement by those participants. Reluctance to engage in interprofessional learning has been rooted in the traditional teaching models where meeting different learners’ objectives has been challenging. SBE offers the ability to engage in learning that mirrors the real working environment, yet uptake for CPD activities is still slow despite its inherent advantages. Finally, SBE requires educators with expertise in case preparation, facilitation, and debriefing. Limited faculty with these skills and the considerable time required to prepare for these sessions are both barriers to implementing CPD programs, especially in smaller centers. Table 14.6 outlines these barriers and offers potential solutions to the implementation of SBE into our continuing education programs.

Table 14.6 Barriers and solutions to continuing professional development (CPD)

Challenges in Competency-Based Simulation Education

Ultimately SBE is a tool that offers many advantages over traditional education delivery such as lectures, courses, and workshops [2]. However, it cannot work in isolation and must be integrated into curricula to achieve the competencies or learning objectives set out by governing bodies across the various disciplines.

At the same time that SBE has flourished, CBE has become the new paradigm in professional medical education. Many specialties have embraced its theoretical advantages: a focus on outcomes, an emphasis on abilities derived from societal needs over knowledge, de-emphasis of time-based training, and promotion of learner-centered training to achieve milestones [35]. Despite these advantages, many concerns exist. Defining learning objectives that are comprehensive yet not exhaustive is a challenge for medical educators. There is a fear that endless lists of competencies will overwhelm learners and reduce competence to a series of tasks rather than capturing what truly makes a healthcare provider. Another concern is that learners will focus on achieving milestones, “jumping over the hurdle” and achieving bare competence rather than striving for excellence. Scheduling of trainees at different stages has the potential to create administrative and logistical challenges while trying to balance clinical needs. Although most trainees will complete training in a similar time frame as traditional curricula, some trainees will take considerably longer, requiring increased resources [35]. Simulation training embedded into these programs is expensive and requires educational expertise that is already in high demand. Finally, as highlighted previously, assessment tools and processes will need to be developed that are “more continuous and frequent, criterion-based, developmental, work-based where possible, …and involve the wisdom of group process in making judgments about trainee progress” [56].

Conclusions

High-Stakes Testing

SBE is being increasingly used for summative purposes [59]. These high-stakes decisions include passing a program, gaining certification or licensure, and maintenance of competence. SBE is ideally suited to measure competencies beyond traditional knowledge-based exams. Organizing bodies such as the ACGME and the Royal College of Physicians and Surgeons of Canada (RCPSC) require that examinations are tailored towards skills that mimic actual practice behaviors [4]. Assessment of most of the six core competencies of the ACGME/American Board of Medical Specialties (ABMS) and the seven CanMEDS roles of the RCPSC, for example, can readily be achieved using simulation-based environments. Use of simulation for high-stakes testing is emerging in many specialties. One example is the use of procedural simulation for carotid stenting, where training and passing examinations are required for certification [60]. Use of simulation in high-stakes examination has also been reported in anesthesia, surgery, and internal medicine [61].

Another use of SBA is in the maintenance of certification. Many specialties require either recertification examinations (ABMS) or aggregation of hours in learning activities (RCPSC) in order to maintain certification. Pressure to ensure that these activities reflect patient care competencies rather than knowledge acquisition has led educators to incorporate simulation into these programs [62]. The use of simulation in maintenance of certification in anesthesia and surgery has been described [63]. In Canada, as part of its maintenance of certification program, the RCPSC recognizes and gives credits for learning activities. For example, attending a conference receives 1 credit per hour and reading a journal article 1 credit per article. In contrast, learners receive 3 credits for each hour of approved assessment-driven simulation activity [64].

Although there has been tremendous uptake of SBA, challenges still exist. Curricula do not always match the assessment. The concept that “assessment drives education” should serve as an impetus for curricular development in SBE that is comprehensive and standardized. SBA scoring strategies must be robust and achieve high degrees of reliability and validity. Experts in the clinical field (content experts), simulation, and measurement are all essential for high-stakes examinations. Further, SBA raters need to be qualified and properly trained with appropriate review of scoring rubrics and emphasis on rater consistency [59]. Finally, SBA is an expensive methodology and must be supported by professional and regulatory boards with the view that patient safety is worth the investment.

Role of National/Shared Curricula

Until recently, simulation programs have been built haphazardly. Sessions were developed locally and were dependent on educators with simulation experience, on labs and equipment of variable quality, and on participant availability. Typically, programs were considered an add-on to other components of the education curriculum. As a result, there has been a push to develop standardized trainee-focused curricula that cover the core competencies of accredited training programs. Examples exist in the undergraduate medical education literature of attempts to incorporate disaster management [65] and simulation-based pediatric clinical skills [66] into the curriculum. In postgraduate medical education, many centers have reported the development and evaluation of standardized simulation curricula. Examples include the specialties of pediatrics, surgery, emergency medicine, and pediatric emergency medicine [67]. Despite these recent attempts to develop simulation-based educational curricula, standardized curricula have yet to be accepted at a national or international level. However, this is likely to change. In Canada, a group of pediatric emergency medicine physicians has recently developed a national standardized simulation curriculum [68]. Additionally, a national pediatric residency program simulation curriculum is being developed, while the anesthesiology specialty program is moving towards CBE and incorporating simulation into its curricula and evaluation process. It is only a matter of time before many programs follow suit, and SBE becomes an integral component of training and CPD programs.