The past two decades have seen rapidly growing interest in the use of simulation for clinical skills training, enhancing patient safety, and reducing medical and surgical errors [1–3]. Simulation technology is “now a central thread in the fabric of medical education” [4].

There have been numerous studies of the use of simulation in training, including randomized controlled trials [5–9]. These investigations have provided substantial evidence that simulation technology can produce many beneficial educational effects. In addition, McGaghie et al. [2] state that “a growing body of evidence shows that clinical skills acquired in medical simulation laboratory settings transfer directly to improved patient care practices and better outcomes.”

However, integration of simulation-based medical education into the curricula of both undergraduate and postgraduate medical education has been challenging. Although curricula exist, many are based on experiential notions and have not used a standardized approach to ensure interoperability of curriculum design [10, 11]. Approaches to standardization have focused on developing frameworks and principles for the design of simulation curricula in medical and surgical skills [12–16]. Furthermore, the implementation guidelines provided by many of these approaches are often not structured in a format that conforms to accreditation requirements.

In 2012, members of the international Alliance for Surgical Simulation Education and Training (ASSET) developed an evidence-based framework for the design, validation, evaluation, and implementation of a generic simulation-based training curriculum template for any surgical procedure in any surgical specialty [15]. This framework represents the first step in establishing a uniform, international approach for the development and validation of simulation curricula. However, it is aimed mainly at surgical training and as such follows a procedure-based approach for technical skills, which may be less applicable to general clinical skills such as history taking and physical examination.

Guidelines for developing simulation training courses would ideally be embedded in a generally accepted, user-friendly approach to curriculum development that would also conform to academic accreditation requirements. The 6-step model proposed by Kern et al. [17] represents a widely used systematic approach to curriculum development that complies with the requirements and templates of many accrediting bodies and also links curricula to healthcare needs. The six steps are: problem identification and general needs assessment, targeted needs assessment, goals and objectives, educational strategies, implementation, and evaluation and feedback.

This paper describes a methodical, stepwise approach based upon the above curriculum development model and principles of effective simulation design. It provides a consensus approach for creating high-quality simulation-based training curricula (courses) that will meet external accreditation standards and integrate effectively with complementary courses to provide an efficient, coordinated overall curriculum or training program.

Methods

In order to develop a step-by-step model for simulation curriculum development in technical and other clinical skills, a review of the literature was performed to identify relevant English-language journal articles that described: (1) frameworks for the systematic development of curricula for simulation-based clinical training and (2) features and best practices of simulation-based medical education. The search covered 14 literature databases, including BMJ, Cambridge Journals, EBSCO-Academic Search Complete, ISI Web of Knowledge, JSTOR, McGraw Hill Access Medicine, MD Consult, MEDLINE (EBSCO), Ovid, ProQuest MEDLINE, ScienceDirect, and Scopus. The search, which was performed using the King Saud University Academic Digital Library (http://www.ac-knowledge.net/ksu), was limited to the Health and Medicine disciplines. The search timeframe extended from 1995 to 2015.

Keywords used for the first search query were “framework or model for design,” “systematic curricula development,” and “simulation-based clinical training.” The initial search revealed 3,966 articles. Limiting the search so that “framework or model for design” was mentioned as a primary focus in the title and “systematic curricula development” and “simulation-based clinical training” were included as keywords revealed 33 articles.

The screening criteria for selection of articles in the first query included: (1) relevance to the search purpose; (2) description of a framework or model based on concepts and theory of simulation-based clinical training; (3) not describing a model for genetic or biological simulation; and (4) number of citations of the article. Critical review was performed on the five eligible retrieved articles.

The most cited article, by Aggarwal et al. [12], describes a framework for systematic training and assessment of technical skills (61 citations). It describes in detail the theory and concepts underlying the suggested framework and identifies the importance of displaying the cognitive, technical, and personal skills required to meet the needs of patients and society. The second most highly cited article, by Zevin et al. [15], describes a consensus-based framework for the design, validation, and implementation of simulation-based training curricula in surgery (14 citations). It has the advantage of describing a framework developed by international expert consensus using current evidence-based methodological principles for simulation-based training, including those described by Aggarwal et al. [12].

Keywords used for the second query were “simulation features,” “best practices,” and “effective learning.” The initial search revealed 19,317 articles. Limiting the search so that “simulation features” was the primary focus mentioned in the abstract, with “best practices” and “effective learning” as keywords, revealed 93 articles.

The screening criteria for selection of articles in the second query included: (1) relevance to the search purpose; (2) original research or systematic and/or critical reviews with appropriate methodology; (3) number of citations of the article; and (4) evidence-based conclusions.

Applying these criteria, six articles were judged as important. The article by Issenberg et al. [18] was cited 1,520 times and is evidence-based and highly relevant. Moreover, it provided the key simulation criteria and best practices that were critically reviewed by McGaghie et al. [4] in 2010 (238 citations). These features were also investigated in a comparative systematic review and meta-analysis of the effectiveness of instructional design features in simulation-based education by Cook et al. [19], cited 55 times, which confirmed the effectiveness of the features identified by Issenberg et al. [18]. The same features were also used to develop a best evidence practical AMEE guide (41 citations) [20]. In an article with 25 citations, Paskins and Peile [21] reported evidence that six of the ten features listed in the Issenberg et al. article [18] appeared to be of particular value in final-year medical students’ views on simulation-based teaching.

Based upon an analysis of the above articles, we identified 14 features and best practices of simulation-based medical education, listed below (with comments in parentheses added by the authors for emphasis):

1. Providing feedback (with special emphasis on formative and summative feedback)

2. Curriculum integration (interoperability, so that simulation training complements other parts of the overall curriculum in order to most effectively and efficiently achieve common program goals and objectives)

3. Range of task difficulty level (starting with simple tasks and progressing to complex ones)

4. Deliberate practice (and the critical role of formative feedback)

5. Individualized learning (with educational experiences where learners are active participants, not passive bystanders)

6. Controlled environment (where learners can make, detect, and correct errors without adverse consequences—the “permission to fail” concept)

7. Clearly stated objectives (with measurable outcomes, including metrics development, that lead to learners mastering skills and provide evidence of curriculum validity)

8. Simulation fidelity (matched to the level of the learner)

9. Skill acquisition and maintenance (to ensure lifelong learning and continuous assessment)

10. Mastery learning (with reference to the Dreyfus and Dreyfus [22] pyramid of mastery that progresses from novice through competent, proficient, and expert to master)

11. Transfer to practice (transfer of training from simulation to clinical practice, and the transfer effectiveness ratio, i.e., the ratio of time needed to train in a simulation course as opposed to the time needed to train in the clinical setting, as part of program evaluation and validation; see the worked example following this list)

12. Team training (including communication skills, professionalism, leadership, and other “soft” skills)

13. Professional context (emphasizing self-awareness, self-assessment, and inter-professional relationships)

14. Instructor training and education (both faculty development to ensure expertise in the use of simulation methods and “train-the-trainer” courses so that trained faculty can teach future faculty)
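The transfer effectiveness ratio in item 11 lends itself to a short worked example. The sketch below implements one commonly cited formulation from the transfer-of-training literature, which expresses the ratio as clinical training time saved per unit of simulation time; this specific formula and all numbers are illustrative assumptions, not a prescription of the model or of the articles reviewed above.

```python
def transfer_effectiveness_ratio(control_clinical_time: float,
                                 sim_trained_clinical_time: float,
                                 simulation_time: float) -> float:
    """One common formulation of the transfer effectiveness ratio (TER).

    TER = (clinical training time needed without simulation
           - clinical training time needed after simulation)
          / time spent in simulation training.

    A TER of 0.5 means each hour of simulation saved half an hour of
    training time in the clinical setting. (Illustrative formulation,
    assumed for this sketch.)
    """
    time_saved = control_clinical_time - sim_trained_clinical_time
    return time_saved / simulation_time


# Hypothetical numbers: trainees normally need 20 h of supervised clinical
# practice to reach the benchmark; after 4 h in the simulator they need
# only 14 h.  TER = (20 - 14) / 4 = 1.5.
print(transfer_effectiveness_ratio(20.0, 14.0, 4.0))  # -> 1.5
```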

The next step of the development process was the adaptation of the 6-step approach for curriculum development [17] to the best practices for simulation-based medical education elucidated above. This adaptation addressed both the main steps of curriculum development and the detailed description of how each step applies to simulation courses.

The adaptation process included separating the sixth step of the 6-step approach to curriculum development, “evaluation and feedback,” into two steps: Step 5, “Individual Assessment and Feedback,” and Step 6, “Program Evaluation and Curriculum Validation.” We moved “Implementation,” the fifth step in Kern and colleagues’ model, to Step 7 in our model to emphasize the need for implementing evaluation as well as instructional strategies.

A questionnaire was developed to elicit input from experts in simulation and curriculum development on our proposed model for developing simulation curricula. We emailed the questionnaire, with invitations to participate, to members of the Alliance for Surgical Simulation Education and Training (a group of senior leadership representatives of 16 US surgical societies and nine surgical societies from other countries), to two international health professions education experts, and to an international expert in curriculum development.

The survey items were evaluated on a three-tiered Likert scale (include without modifications, include after modifications, and do not include). A column was provided to allow our expert panel to comment on each step and its application to simulation. Inter-rater agreement meeting the standard threshold (IRR ≥ 0.80) on each item was considered an acceptable consensus level qualifying the item for inclusion in the final model. Two e-mail reminders were sent to the invited experts over a period of 1 month.
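To make the consensus rule concrete, the following minimal sketch computes per-item agreement as the proportion of raters choosing the modal response and compares it against the 0.80 threshold. The survey did not specify which IRR statistic was used, so modal-proportion agreement here is an illustrative assumption.

```python
from collections import Counter

# Possible responses on the three-tiered scale used in the questionnaire.
RESPONSES = ("include", "include_with_modifications", "do_not_include")

def item_agreement(ratings: list[str]) -> float:
    """Proportion of raters choosing the most common response.

    The paper reports an IRR >= 0.80 consensus threshold without naming
    the statistic; modal-proportion agreement is an illustrative stand-in.
    """
    modal_count = Counter(ratings).most_common(1)[0][1]
    return modal_count / len(ratings)

# Hypothetical ratings from a 17-member panel for one model step.
ratings = ["include"] * 15 + ["include_with_modifications"] * 2
agreement = item_agreement(ratings)
print(f"agreement = {agreement:.2f}, keep item: {agreement >= 0.80}")
# -> agreement = 0.88, keep item: True
```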

The comments and suggested modifications received were reviewed by the authors. Consensus was reached regarding revisions in the model based upon the comments.

Results

Questionnaire

We recruited 17 members for our expert review panel. The panel consisted of 14 expert members of the ASSET group, including two of the authors of the ASSET framework article on the design of simulation-based curricula for surgery [15]; two international experts in health professions education; and an international expert in curriculum development who is the principal editor of the book describing the 6-step model for curriculum development [17]. The panel included MD (or equivalent), MD/MPH, MD/MS, and MD/PhD specialist leaders/educators in general internal medicine, general surgery, obstetrics/gynecology, orthopedic surgery, pediatrics, pediatric surgery, pediatric otolaryngology, plastic surgery, and trauma surgery, as well as two PhD-level educators specializing in simulation training.

There was unanimous agreement that the seven proposed curriculum development steps should be included in the model. Several comments affirmed the model; several others suggested modest revisions in the descriptors of simulation design. The latter comments were reviewed by the authors, and a consensus response was developed. Table 1 displays these comments and our responses.

Table 1 Expert panel member comments/suggestions and responses (received from six panel members)

The model

The model, which is summarized in Table 2, includes the following seven steps: problem identification and general needs assessment; targeted needs assessment; cognitive, psychomotor, and team-based training objectives; educational strategies; assessment of and feedback to individual learners; program evaluation, including curriculum validation; and implementation.

Table 2 Model for developing simulation curricula

Step 1 Problem identification and general needs assessment is done at the international, national, or regional rather than the institutional level and generally includes a review of the relevant published literature and other available information (e.g., public health statistics and documents prepared by accrediting bodies or professional organizations). It may involve the use of expert consultants or the collection of new information. It grounds the simulation curriculum in societal needs and makes it more generalizable. It also helps curriculum developers build upon what already exists.

Step 2 Targeted needs assessment is done at the institutional level and involves collection of data on the institution’s targeted learners and learning environment. It grounds the curriculum in the specific needs of these learners, which may differ from the needs of learners in general. It also helps integrate the simulation curriculum effectively with an institution’s overall curriculum. It identifies stakeholders and involves them in the process of curriculum design.

Step 3 Goals and objectives involves the development of general goals as well as specific measurable objectives that direct educational content and methods as well as evaluation. For simulation curricula, this involves developing and defining objectives (outcome measures) with either a specific quantifiable numeric value (centimeters, degrees, pounds, etc.) or an unambiguous definition for non-numeric values (e.g., “cross-check” is defined as the assistant repeating the surgeon’s request verbatim), for both the cognitive (didactic) prerequisites and the specific psychomotor skills to be taught. It usually involves developing objectives for both individual and team performance.
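As an illustration of Step 3, the sketch below encodes a measurable objective with its metric, unit, benchmark, and unambiguous definition. The schema and example values are hypothetical, intended only to show the distinction between quantifiable numeric outcome measures and unambiguously defined non-numeric ones.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SimulationObjective:
    """A measurable objective in the style of Step 3 (illustrative schema)."""
    skill: str                  # psychomotor skill or cognitive prerequisite
    metric: str                 # what is measured
    unit: Optional[str]         # numeric unit, or None for non-numeric metrics
    benchmark: Optional[float]  # target value for numeric metrics
    definition: str             # unambiguous definition of correct performance

# Numeric outcome measure: a quantity with a unit and a benchmark value.
depth = SimulationObjective(
    skill="central line insertion",
    metric="needle depth",
    unit="cm",
    benchmark=3.0,
    definition="needle tip advanced no more than the benchmark depth",
)

# Non-numeric outcome measure with an unambiguous behavioral definition.
cross_check = SimulationObjective(
    skill="team communication",
    metric="cross-check",
    unit=None,
    benchmark=None,
    definition="assistant repeats the surgeon's request verbatim",
)

print(depth, cross_check, sep="\n")
```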

Step 4 Educational strategies, which include both the content to be taught and the educational methods to be used, require special attention in simulation. Skills and procedures need to be “deconstructed” into component tasks (task analysis and task deconstruction). Common and important errors in performance must follow the same deconstruction process when defining their outcome measures and metrics. It is critical to “teach” errors and how to identify, avoid, or remediate them. Simulation is the only educational tool that provides “permission to fail” in a technical procedure without injury to a patient. By quantitatively measuring performance, a benchmark can be set for desired performance and the learner can be trained to proficiency on that benchmark. This provides two advantages: the learner is trained to a 100 % correct (passing) score, and the training is personalized to the learner’s capabilities, since training continues until the benchmark is reached. The variable is not the final score but rather the number of trials (until the 100 % benchmark score is reached).

Because the development of simulation curricula is usually resource intensive, one generally wants to develop content validity evidence for what is being taught through literature review and consensus input from multiple stakeholders and experts. For example, input ideally would be obtained from both clinical experts and practitioners, who must integrate what is being taught into everyday practice. It is important to match the fidelity of the simulation to the level of the learner: novices can learn from simple low-fidelity models, whereas advanced learners need more complex, higher-fidelity simulation. The complexity of the tasks the learner must achieve also increases as they approach benchmark levels. This is the essence of proficiency-based training [9]. These processes can also enhance a curriculum’s chances of being accepted for publication and of being used by others. Faculty development and train-the-trainer instruction are particularly important, as faculty must be skilled in simulation methods, provision of feedback, and small group facilitation. Other considerations are listed in Table 2.
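The train-to-proficiency logic described above (a fixed passing score with a variable number of trials) can be sketched as a simple loop. The scoring function here is a toy placeholder; in a real curriculum the score would come from the simulator’s metrics for the deconstructed component tasks and errors.

```python
import random

def simulated_score(trial: int) -> float:
    """Placeholder for the simulator's metric-based score on a given trial.

    Toy learning curve that improves with practice; a real implementation
    would score the deconstructed component tasks and recorded errors.
    """
    return min(100.0, 60.0 + 5.0 * trial + random.uniform(-5.0, 5.0))

def train_to_proficiency(benchmark: float = 100.0, max_trials: int = 50) -> int:
    """Repeat trials until the benchmark score is reached.

    The passing score is fixed at the benchmark; the number of trials is
    the variable that differs between learners, as Step 4 describes.
    """
    for trial in range(1, max_trials + 1):
        if simulated_score(trial) >= benchmark:
            return trial  # trials-to-proficiency is the recorded outcome
    raise RuntimeError("benchmark not reached within max_trials")

print("trials to proficiency:", train_to_proficiency())
```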

Steps 5 and 6 Individual assessment/feedback and program evaluation. Evaluation has been broken down into two separate components: individual assessment/feedback and overall program evaluation. Individual assessment: this component is particularly important in simulation training because the assessment instruments developed are also an integral part of Step 4, Educational strategies—training and assessment are two sides of the same coin. Assessment tools are used for formative evaluation (immediate feedback when an error occurs) and for summative evaluation (reflection, as well as a final report of total performance). Because simulation is resource intensive and often used for high-stakes summative assessments, special attention needs to be paid to the reliability and validity of the instruments being used. Likewise, consideration should be given to usability, dependability, and practicality—without these qualities, most curricula will not be used. Program evaluation: this component is critical to ensure that learners are actually achieving the desired outcomes and that the curriculum evolves with improvements in educational practice, technology, and knowledge. Additional considerations are listed in Table 2.

Step 7 Implementation relates to all steps of the curricular development process, which are then integrated into the “instructional design” of how to present the course to the learner for both training and assessment purposes. Again, because simulation is a resource-intensive educational methodology, the curriculum developer wants to ensure that it is being used strategically and efficiently in terms of the overall curriculum design, goals, and objectives. This will help engender the necessary resources and political support.

As pointed out by Kern et al. [17], “curriculum development does not usually proceed in sequence, one step at a time. Rather, it is a dynamic, interactive process that continues and the curriculum evolves, based on evaluation results, changes in resources, targeted learners, and the material requiring mastery.”

Discussion

The systematic process described in this study included a review of existing approaches to curriculum development and simulation design. We selected the curriculum development model of Kern et al. [17], the most prevalent one used in medical education, as a starting point. We revised this model slightly to emphasize the separate importance of individual assessment and program evaluation in simulation training. We integrated into our model a synthesis of principles of simulation design articulated by Issenberg et al. [18], McGaghie et al. [4], and Zevin et al. [15] in the ASSET framework for surgical simulation training, as well as principles derived from earlier work by other authors [12–14, 16]. The proposed model was then reviewed and commented upon by a panel of 17 educational experts. Based upon their input, revisions were made. In this paper, we present this revised consensus approach for developing simulation curricula.

Not surprisingly, there was unanimous agreement among our educational experts regarding the curriculum development steps in our model, which were based upon Kern’s 6-step model, a model that has already gained international recognition [17]. The only variation was that our model divided Kern’s evaluation step into two steps: Individual Assessment/Feedback and Program Evaluation. Most comments and suggestions for further revision related to the parts of our model that addressed the application of the curriculum development steps to simulation education. They ranged from defining expertise to assessment methodology, educational content, and enhancing the practicality of the model (Table 1). Our final model (Table 2) incorporates revisions based upon this feedback.

Why is such a model desirable? First, simulation has become an increasingly prevalent and important educational methodology. Since the acceptance in 2002 of the validity and value of simulation as a powerful new educational tool [5], there has been a rapid rise in the adoption of simulation, especially for technical skills but also for non-technical skills such as teamwork, leadership, and communication.

Second, while some simulation curricula have used rigorously developed methodologies [23] and assessment tools, e.g., the Objective Structured Assessment of Technical Skills (OSATS) [24], involving medical educators, behavioral psychologists, psychometricians, and human interface technologists, most new curricula are based upon individual experience, use less-than-rigorous methodologies, and are developed for single courses rather than for broad application. The result is multiple approaches to teaching similar skills, each duplicating and conflicting or competing with the others. They rarely use appropriate outcome measures and metrics and rarely subject their curricula to a rigorous validation process.

Third, simulation training is resource intensive and expensive. There is therefore a heightened demand for well-designed interventions, avoidance of duplicated development efforts, and proof of efficacy.

Fourth, it has become apparent that simulation is an educational methodology and not a curriculum per se. Like any educational tool, it needs to be applied judiciously in the development of an overall curriculum.

Finally, it is desirable to have a model that is in accord with external requirements and accreditation standards. Some professional bodies require simulation training. For example, the Accreditation Council for Graduate Medical Education (ACGME) requires simulation centers for General Surgery training [25], and the American Board of Surgery requires skills testing in the Fundamentals of Laparoscopic Surgery (FLS) curriculum for certification in this area [26]. The American College of Surgeons (ACS) has established an accreditation process to ensure the quality of simulation centers, with extensive emphasis on the curricula and courses to be developed; this was accomplished by establishing the American College of Surgeons Accredited Education Institutes (ACS-AEI), whose standards include a section devoted exclusively to “curriculum” development [27, 28]. The ACGME [25], which accredits graduate medical education, and the Liaison Committee on Medical Education (LCME) [29], which accredits medical schools, as well as the World Federation for Medical Education (WFME) [30], require formal curricula that include goals, objectives, and explicitly articulated educational and evaluation strategies. It is reasonable to assume that such external requirements may soon extend to most curricula based upon simulation methodology.

One of the most common concerns of our reviewers related to the definitions of proficiency or expertise. Commonly, expert clinicians are defined as individuals who are in the top percentiles of performance and can adapt to varying circumstances. Ideally, proficiency should be based on behaviors that have been shown to improve clinical outcomes. In the absence of such data, some have used mathematical approaches to define benchmark levels. For example, in one approach, the mean performance of expert/experienced practitioners is established as the “proficiency” level. One and two standard deviations above proficiency correspond to the Dreyfus and Dreyfus [22] categories of true expert and master, whereas one and two standard deviations below proficiency reflect competent and beginner, respectively. The learning curve for a task is frequently considered complete after ≥2 consecutive trials with no continued improvement. The practical application is that setting a benchmark provides the minimal level at which the learner must perform two consecutive trials without error. This is “training to proficiency.” The independent variable is the score (which is 100 %), and the dependent variable is the number of trials needed to achieve proficiency. The use of training and assessment to a proficiency benchmark provides a quantitative method of determining performance readiness for a given skill, task, or procedure. While the approach used to determine benchmarks may vary, we feel that it should be based on a methodical approach for which there is evidence of at least content and construct, if not predictive, validity.
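A minimal sketch of the benchmarking arithmetic described above, assuming proficiency is set at the mean of expert scores, the Dreyfus and Dreyfus categories sit at whole standard deviations around that mean, and the learning curve is considered complete after two consecutive trials with no improvement. All scores are hypothetical.

```python
from statistics import mean, stdev

def dreyfus_bands(expert_scores: list[float]) -> dict[str, float]:
    """Benchmark bands built from expert performance.

    Proficiency = mean of expert/experienced practitioners' scores;
    +1 SD and +2 SD map to expert and master, -1 SD and -2 SD to
    competent and beginner (after Dreyfus and Dreyfus).
    """
    m, s = mean(expert_scores), stdev(expert_scores)
    return {
        "beginner": m - 2 * s,
        "competent": m - s,
        "proficient": m,
        "expert": m + s,
        "master": m + 2 * s,
    }

def plateau_reached(scores: list[float]) -> bool:
    """Learning-curve endpoint: two consecutive trials with no improvement."""
    if len(scores) < 3:
        return False
    return scores[-2] <= scores[-3] and scores[-1] <= scores[-2]

# Hypothetical expert scores (e.g., percent of checklist items correct).
bands = dreyfus_bands([92.0, 95.0, 90.0, 94.0, 93.0])
print(bands)  # the proficiency benchmark is bands["proficient"]

trainee_scores = [70.0, 78.0, 85.0, 85.0, 84.0]
print("plateau:", plateau_reached(trainee_scores))  # -> True
```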

To our knowledge, ours is the first consensus model that integrates accepted principles of curriculum development and simulation design in a manner that meets accreditation standards and is generally applicable across health professions specialties. It was developed using a methodical approach to establish content validity, including literature review, consensus on the major steps by a panel of 17 educational experts, and consensus revision by the authors of the description of how each step applies to simulation curricula after review of the expert panel comments.

A limitation is that the model has not yet been applied broadly and may need to be revised as it is applied and evaluated for other forms of validity beyond content validity, as well as for its usability and practicality. While this template will clearly not satisfy the opinions of all experts in the field of medical education, it is a substantive ‘first edition’ that is intended to be dynamic and modifiable. It has been kept simple so that it can be used and adapted to support a variety of specialties, similar to other models (such as FLS and OSATS) [23, 24] that began as initial efforts and were subsequently validated, modified, and adapted.

Conclusion

We hope that the model for developing simulation curricula developed in this study proves useful to simulation educators across disciplines, by providing a template for integrating established principles of simulation design and curriculum development. If widely adopted, such a template has the potential to reduce the variability in and increase the quality of simulation-based curricula, some of which can be disseminated broadly. It could also increase the efficiency of overall curriculum development within institutions.