Introduction

Standardized patients (SPs) are individuals who have been carefully selected and trained to portray a patient in order to teach and/or evaluate the clinical skills of a healthcare provider. Originally conceived and developed by Dr. Howard Barrows [1] in 1964, the SP has become increasingly popular in healthcare and is considered the method of choice for evaluating three of the six common competencies [2] now recognized across the continuum of medicine.

The standardized patient, originally called a “programmed patient,” has evolved over 50 years from an informal tool into a ubiquitous, methodologically sound modality for teaching and evaluating a broad array of competencies for diverse groups of trainees within and outside of healthcare. Barrows and Abrahamson [1] developed the standardized patient technique in the early 1960s as a tool for clinical skills instruction and assessment. During a consensus conference devoted to the use of standardized patients in medical education, Barrows [3] described the development of this unique modality. He was responsible for recruiting patients for the Board Examinations in Neurology and Psychiatry and soon realized that the use of real patients was not only physically taxing for them but also detrimental to the integrity of the examination: patients would tire and alter their responses depending upon the examiner, the time of day, and other situational factors.

Barrows also recognized the need for a more feasible teaching and assessment tool while instructing his medical students. To aid in the assessment of his neurology clerks, he coached a female model from the art department to simulate paraplegia, bilateral positive Babinski signs, dissociated sensory loss, and a blind eye. She was also coached to portray the emotional tone of an actual patient displaying these troubling symptoms. Following each encounter with a clerk, she would report on his or her performance. Although initially controversial and slow to gain acceptance, this unique standardized format eventually caught the attention of clinical faculty and became a common tool in the instruction and assessment of clinical skills across all disciplines of healthcare.

This chapter presents a historical overview of the standardized patient and describes the prevalence of, current uses of, and challenges in using SPs to teach and assess clinical competence. A framework for initiating, developing, executing, and appraising SP encounters is provided to guide the process of integrating SP encounters within medical and other healthcare professions’ education curricula.

Common Terminology and Uses

The title “SP” is used, often interchangeably, to refer to several different roles. Originally, Dr. Barrows referred to his SPs as “programmed” or “simulated patients.” Simulated patients are SPs trained to portray a role so accurately that they cannot be distinguished from actual patients. The term “standardized patient” was introduced almost two decades later by the psychometrician Dr. Geoff Norman and colleagues [4] to emphasize the high degree of reproducibility and standardization of the simulation required to offer a large number of trainees a consistent experience. Accurate simulation is necessary but not sufficient for a standardized patient. Today, the term “SP” is used interchangeably to refer to a simulated or standardized patient or participant. Refer to the Appendix for a description of the common SP roles and assessment formats.

The use of standardized patients has increased dramatically, particularly over the past three decades. A recent census by the Liaison Committee on Medical Education [5] reported that 96% of US medical schools have integrated OSCEs/standardized patients for teaching and assessment within their curricula. Today, internal medicine clerkships (85%) are the most likely, and, ironically, neurology clerkships (28%) the least likely, to incorporate SPs into their curricula for the assessment of clinical knowledge and skills. Approximately 50% or more of the clerkships in internal medicine, OB/GYN, family medicine, psychiatry, and surgery use SP exams/OSCEs to determine part of their students’ grades. In addition, many medical licensing and specialty boards in the United States and Canada use standardized patients to certify physician competencies [6, 7]. Notable examples include (a) the National Board of Medical Examiners, (b) the Medical Council of Canada, (c) the Royal College of Physicians and Surgeons of Canada, and (d) the Corporation of Medical Professionals of Quebec. Numerous other healthcare education programs also use SPs for instruction and assessment [8].

As SP methodologies expanded and became more sophisticated, professionals working in the field became more specialized, and a new role of educator was born: the “SP educator.” An international association of SP educators (ASPE) was created in 1991 to develop professionals, advance the field, and establish standards of practice for training SPs and for designing and evaluating encounters. Its diverse members include SP educators from allopathic medicine as well as osteopathy, dentistry, pharmacy, veterinary medicine, allied health, and other fields [9]. The association is currently collaborating with the Society for Simulation in Healthcare to develop certification standards for professional SP educators [10].

Standardized patients can be used in a variety of ways to teach, reinforce, and/or assess competencies of healthcare professionals. In the early decades of SP use, their primary role was formative and instructional. Although still widely used for the teaching and reinforcement of clinical skills, as the practice has matured and evidence has mounted, SPs have been integrated into certification and licensure examinations in the United States, Canada, and increasing numbers of other countries.

Advantages and Challenges

There are several advantages to using SPs to train and evaluate healthcare professionals. Table 13.1 includes a summary of the advantages and challenges of using SPs in teaching and assessment. The most notable advantages include increased opportunity for direct observation of trainees in clinical practice, protection of real patients from trainees’ novice skills and repeated encounters, standard and flexible case presentation, and ability to provide feedback and evaluation data on multiple common competencies.

Table 13.1 Advantages and challenges of using SPs

Prior to the 1960s, clinical teaching and evaluation methods consisted primarily of classroom lectures, gross anatomy labs, bedside rounds with real patients, informal faculty observations, oral examinations, and multiple-choice tests. Before the introduction of the SP, there was no objective method for assessing the clinical skills of trainees. This early simulation opened doors to what is today a recommended educational practice for training and certification of healthcare professionals across geographic and discipline boundaries.

An SP can be trained to consistently reproduce the history, emotional tone, communicative style, and physical signs of an actual patient without placing stress upon a real patient. Standardized patients also provide faculty with a standard assessment format: learners are assessed interacting with the same patient portraying the same history, physical signs, and emotional content. Unlike actual patients, SPs offer flexibility in the types of cases presented and can be available at any time of day and for extended periods. SPs can be trained to accurately and consistently record student performance and provide constructive feedback to the student, greatly reducing the amount of time clinical faculty members must spend directly observing trainees in practice.

SPs can also be trained to perform certain basic clinical procedures and, in turn, aid in the instruction of trainees. When SPs are integrated within high-fidelity simulation experiences (hybrid simulations), they enhance the authenticity of the experience and provide increased opportunities to teach and assess additional competencies, namely, interpersonal communication skills and professionalism.

Framework for SP Encounters

Effective SP encounters require a systematic approach to development. Figure 13.1 displays a stepwise approach for preparing and using SPs in healthcare education. The IDEA framework is intended to serve as a guide to developing SP encounters, particularly for SP assessments (OSCE, CPX). Each of the four steps (Initiate, Develop, Execute, and Appraise) is described throughout this chapter with increased emphasis on the first two. This framework does not generally apply to a single large-group SP demonstration and assumes that the SP encounters are repeated to teach and/or evaluate multiple trainees.

Fig. 13.1

The IDEA Framework: a stepwise process for preparing and using SPs in healthcare education

Initiate: Initial Considerations for Using SPs in Healthcare

When initiating an SP encounter, it is important to clarify the purpose of the exercise. Encounters are intended to reinforce, instruct, and/or assess competencies. When designing an encounter to assess, the intent should be clarified as formative (to monitor progress during instruction) or summative (to determine performance at end of instruction). Due to the increased stakes, summative assessments generally require more rigor during the development process. However, all assessments require some evidence to support the credibility of the encounters and the performance data they derive. It is recommended that the developer consider at the earliest stage how she/he will determine whether the encounter performed as it was intended. How will you appraise the quality of the SP encounter? Attention paid to the content, construction of data gathering tools, training of SPs, and methods for setting performance standards throughout the process will result in more defensible SP encounters.

Goals and Objectives

Next, it is important to consider the competencies and performance outcomes that the encounter is intended to address. SP encounters are best suited for teaching and assessing patient care as well as the affective competencies of interpersonal and communication skills and professionalism. If the specific competencies and objectives have been predefined by a course or curriculum, it is simply a matter of determining which are most appropriate to address via the SP encounter. Alternatively, there are several methods for determining the aggregate needs of trainees, including the research literature, association and society recommendations, accreditation standards, direct observations of performance, questionnaires, formal interviews, focus groups, performance/test data, and sentinel events. These data will help the developers understand the current as well as the ideal state of performance and/or approach to the problem. For detailed information on assessing the needs of trainees, see Kern et al. [11].

Writing specific and measurable objectives for an SP encounter and for the overall SP activity is an important task in directing the process and determining which specific strategies are most effective. Details on writing educational goals and objectives can be found in Amin and Eng [12]. Sample trainee objectives for a single SP encounter include: (1) obtain an appropriate history (over the telephone) from a mother regarding her 15-month-old child suffering from symptoms of an acute febrile illness, (2) differentiate relatively mild conditions from emergent medical conditions requiring expeditious care, and (3) formulate an appropriate differential diagnosis based on the information gathered from the mother and on the physical exam findings provided to the learner. This encounter, which focused on gathering a history and formulating a differential diagnosis, was included within an extended performance assessment whose overall objective was to provide a standard and objective measure of medical students’ clinical skills at the end of the third year of undergraduate medical education [13].

It is strongly recommended, particularly when multiple encounters are planned, that a blueprint be initiated to guide and direct the process. The blueprint specifies the competencies and objectives to be addressed by each encounter and how they align with the curricula [14]. This ensures that what is being taught, reinforced, or assessed is linked to the broader curriculum. The content of the blueprint will vary according to the nature of the encounter. If developers plan an SP assessment, then this document may be supplemented by a table of specifications, which is a more detailed description of the design.
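
Because a blueprint is essentially a table mapping encounters to competencies, it can also be kept in a form that is easy to audit. The sketch below is a minimal illustration only, not a prescribed format; the station names, competency labels, and Python representation are hypothetical examples of how coverage gaps might be tallied.

```python
# Minimal sketch (hypothetical stations and competencies): keeping the
# blueprint as data so that competency coverage can be tallied automatically.

blueprint = {
    "Acute febrile child (telephone history)": ["history taking", "clinical reasoning"],
    "Shoulder dystocia (hybrid)":              ["patient care", "communication"],
    "Knee examination":                        ["physical examination"],
    "Smoking cessation counseling":            ["communication", "professionalism"],
}

required_competencies = [
    "history taking", "physical examination", "clinical reasoning",
    "communication", "professionalism", "patient care",
]

# Count how many encounters address each required competency.
coverage = {c: sum(c in comps for comps in blueprint.values())
            for c in required_competencies}

for competency, n in coverage.items():
    flag = "" if n else "  <-- not covered by any encounter"
    print(f"{competency:22s} {n} encounter(s){flag}")
```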

Questions to consider at this stage in the development process include: What is the need? What do trainees need to improve upon? What external mandates require or support the use of the educational encounter? What is currently being done to address the need? What is the ideal or suggested approach for addressing the need? What data support the need for the educational encounter? What is the current performance level of trainees? What will the trainees be able to do as a result of the educational encounter?

Strategies

SPs can be used in a variety of ways, and the strategy should derive from the answers to the above questions. In general, these strategies fall within two broad areas: instruction and assessment. Several sample strategies for each area are described below.

Instruction

Although SPs can be used to enhance a large-group or didactic lecture by demonstrating a procedure, a counseling technique, or steps in a physical examination, their greatest instructional benefit is found in small-group or individualized sessions where experiential learning can take place. For example, an SP meets with a small group of trainees and a facilitator. He or she is interviewed and/or physically examined by the facilitator as a demonstration, and then the trainees are given the opportunity to perform while others observe. One variation of the small-group encounter is the “time-in/time-out” technique, originally developed by Barrows et al. [15]. During this session, the interview or physical exam is stopped at various points so that the group can engage in discussion, question the thoughts and ideas of the trainee, and provide immediate feedback on his or her performance. During the “time-out,” the SP suspends the simulation and remains silent and passive, as though he or she were no longer present. When “time-in” is called, the SP continues as if nothing had happened and is unaware of the discussions that just occurred. This format allows for flexible deliberate practice, reinforcement of critical behaviors, peer engagement, and delivery of immediate feedback on performance.

Another example of an effective individual or small-group teaching encounter is the hybrid [16] encounter, which combines an SP with a high-fidelity technical simulator. During these encounters, technical simulators are integrated with the SP to give trainees the opportunity to learn and/or demonstrate skills that would otherwise be impossible or impractical to simulate during an SP encounter (e.g., lung sounds, cardiac arrest, labor and childbirth). For example, Siassakos et al. [17] used a hybrid simulation to teach medical students the skills to deliver a baby with shoulder dystocia as well as effective communication strategies with the patient. In this simulation, the SP was integrated with a pelvic model task trainer and draped so that the model appeared to be part of her own body. The SP simulated the high-risk labor and delivery and subsequently provided feedback to the students on their communication skills. This hybrid encounter was found to improve the communication skills of the randomly assigned students compared with those of a control group.

Another example, which highlights the importance of creating patient-focused encounters, comes from Yudkowsky and colleagues [18]. They compared the performance of medical students suturing a bench model (a suture skin training model including simulated skin, fat, and muscular tissue) with and without SP integration. When the model was attached to the SP and draped as though it were the SP’s own arm, student performance in technical suturing as well as communication was significantly weaker than performance under the non-patient condition. This research challenges the assumption that novice trainees can translate newly acquired skills directly to a patient encounter and reminds us of the importance of context and fidelity in performance assessment [19]. The authors concluded that the hybrid encounter provided a necessary, intermediate, safe opportunity to further hone skills prior to real patient exposure.

Most SP encounters are single sessions, in which the trainee interacts with the SP once rather than over a series of encounters. However, the use of SPs in longitudinal encounters has been shown to be very effective, particularly for teaching the complexities of disease progression and the family dynamics of healthcare. To address family-centered care objectives, Pugnaire et al. [20] developed the concept of the standardized family at the University of Massachusetts Medical School. Initially termed the “McQ Standardized Family Curriculum,” these longitudinal instructional SP encounters involved multiple participants portraying different family members, and medical students participated in several encounters over the course of their clerkship. More recently, Lewis et al. [21] incorporated SPs into a series of three encounters to teach residents how to diagnose, treat, and manage a progressive disease (viz., Alzheimer’s disease) over the 10-year course of the “patient’s” illness. The sessions took place over three consecutive days, but the case was time-lapsed across the 10 years to simulate the progressive illness. These sessions provided opportunities for deliberate practice and reinforced continuity of care.

Over 40 years ago, Barrows [22] described the use of SPs in a clinical “laboratory in living anatomy” where lay persons were examined by medical students as a supplement to major sections of the gross anatomy cadaver lab. This early experiential use of SPs led to the more formal role of the Patient Instructor and the Gynecological Teaching Associate (GTA), developed by Stillman et al. [23] and Kretzschmar [24], respectively.

SPs are also used to teach physical examination skills. The patient instructor (PI) is a lay person, with or without physical findings, who has been carefully trained to undergo a physical examination by a trainee and then provide feedback and instruction on the performance using a detailed checklist designed by a physician [23]. At the University of Geneva [25], faculty used PIs with rheumatoid arthritis to train third-year medical students to interview and perform a focused musculoskeletal exam. The carefully trained PIs were able to provide instruction on examination and communication skills during a 60-min encounter. As a result, students’ medical knowledge, focused history taking, and musculoskeletal exam skills improved significantly from pre- to post-session. The authors concluded that “grasping the psychological, emotional, social, professional and family aspects of the disease may largely be due to the direct contact with real patients, and being able to vividly report their illness and feelings. It suggests that the intervention of patient-instructors really adds another dimension to traditional teaching.” Henriksen and Ringsted [26] found that PIs fostered patient-centered educational relationships with allied health students. When compared with traditional, faculty-led teaching encounters, a class delivered by PIs with rheumatism trained to teach basic joint examination skills and “respectful patient contact” was perceived by the learners as a safer environment for learning basic skills.

The GTA role has evolved over the years, and the vast majority of medical schools now use these specialized SPs, typically during year 2, to teach novice trainees how to perform the gynecologic, pelvic, and breast examinations and how to communicate effectively with the patient while doing so. Kretzschmar described the qualities the GTA brings to the instructional session as including “sensitivity as a woman, educational skill in pelvic examination instruction, knowledge of female pelvic anatomy and physiology, and, most important, sophisticated interpersonal skills to help medical students learn in a nonthreatening environment.” Male urological teaching associates (UTAs) are similarly used to teach trainees how to perform the male urogenital exam and effective communication strategies for doing so. UTAs have been shown to significantly reduce the anxiety experienced by second-year medical students performing their first male urogenital examination, particularly among female students [27].

Assessment

A vast body of research supports the use of SP assessment as a method for gathering clinical performance data on trainees [28]; a detailed review of this literature is beyond the scope of this chapter. Defensible performance assessments, however, depend on ensuring quality throughout the development and execution phases, and quality should be considered from the very first step in the process of preparing and using SPs in assessment. Norcini et al. [29] outline several criteria for good assessment that should be followed when initiating, developing, executing, and appraising SP assessments. These include validity or coherence, reproducibility or consistency, equivalence, feasibility, educational effect, catalytic effect, and acceptability.

One related consideration is “content specificity”: performance on a single encounter does not transfer to subsequent encounters [30]. This phenomenon has theoretical as well as practical implications for the design, administration, and evaluation of performance assessments. Namely, decisions based on a single encounter are indefensible because they cannot be generalized. When making summative decisions about trainee performance, the most defensible approach is to use multiple methods across multiple settings and aggregate the information to make an informed decision about trainee competence. According to van der Vleuten and Schuwirth, “one measure is no measure,” and multiple SP encounters are warranted for making evidence-based decisions on trainee performance [31].
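
As a purely illustrative sketch of the “one measure is no measure” principle, the snippet below aggregates hypothetical station scores into a composite and compares the composite, rather than any single station, against a pass standard. The station names, scores, and the 70% cut score are invented for illustration, not taken from the chapter.

```python
# Illustrative sketch: because of content specificity, summative decisions
# should rest on an aggregate across encounters, not on a single station.
# Station scores and the 70% cut score are hypothetical.

station_scores = {            # percent scores for one trainee across an exam
    "chest pain":          62,
    "abdominal pain":      81,
    "diabetes follow-up":  74,
    "smoking cessation":   88,
    "knee injury":         69,
}

cut_score = 70  # hypothetical pass standard applied to the composite

composite = sum(station_scores.values()) / len(station_scores)
decision = "PASS" if composite >= cut_score else "FAIL"
print(f"Composite score: {composite:.1f}% -> {decision}")

# Judging the trainee on the "chest pain" station alone (62%) would lead to
# a different, far less generalizable decision than the aggregate (74.8%).
```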

Several additional practical decisions need to be made at this stage in the process. Resources, including costs, time, staff, and space, may limit the choices and are important considerations. Questions include: Does the encounter align with the goals and objectives of the training program? Is the purpose of the encounter to measure or certify performance or to inform and instruct trainees? Formative or summative? Limited or extended? Do the goals and objectives warrant hybrid simulation or unannounced SP encounters? What is the optimal and practical number of encounters needed to meet the goals and objectives of the exercise? What is the anticipated length of the encounters? How many individual SPs will be needed to portray a single role? Is it possible to collaborate in order to share or adapt existing resources?

SP assessments commonly take one of two formats: (a) the objective structured clinical examination (OSCE) or (b) the clinical practice examination (CPX). The OSCE is a limited performance assessment consisting of several brief (5–10-min) stations in which the student performs a very focused task, such as a knee examination, fundoscopic examination, or EKG reading [32, 33]. Conversely, the CPX is an extended performance assessment consisting of several long (15–50-min) stations in which the student interacts with patients in an unstructured environment [15]. Unlike in the OSCE format, trainees in a CPX are not given specific task instructions. Consequently, the CPX is more realistic to the clinical environment and provides information about trainees’ abilities to interact with a patient, initiate a session, and integrate the skills of history taking, physical examination, and patient education.

As with any written test, the format of the performance assessment should be driven by its purpose. If, for example, faculty are interested in knowing how novice trainees perform specific, discrete tasks such as physical examination maneuvers or radiology interpretation, then the OSCE format is suitable. If, however, faculty are interested in knowing how trainees perform more complex and integrated clinical skills such as patient education, data gathering, and management, then the CPX or unannounced SP formats are the better choice. As stated earlier, the primary advantages of using SP encounters to assess performance include the ability to provide highly authentic and standard test conditions for all trainees and to focus the measurement on the specific learning objectives of the curriculum. SP assessments are ideally suited to provide performance information at the third and fourth levels of Miller’s [34] hierarchy: does the trainee “show how” he is able to perform, and does that performance transfer to what he actually “does” in the workplace?

The current USMLE Step 2 CS [35] is an extended clinical practice exam consisting of twelve 15-min SP encounters, each followed by a 10-min post-encounter exercise in which the student documents the findings in an electronic patient note. The total testing time is 8 h. The examination is administered at five testing facilities across the United States. SPs evaluate data gathering performance, including history taking, physical examination, and communication/interpersonal skills. Synthetic models, mannequins, and/or simulators may be incorporated within encounters to assess invasive physical examination skills.

Another less common but highly authentic method for the assessment of healthcare providers is the unannounced SP encounter. During these incognito sessions, SPs are embedded into the regular patient schedule of the practitioner, who is blinded to which patients are real and which are simulated. Numerous studies have shown that these SPs can go undetected by physicians in both ambulatory and inpatient settings [36]. The general purpose of these encounters is to evaluate actual performance in practice: Miller’s [34] fourth and highest (“does”) level of clinical competence. For example, Ozuah and Reznik [37] used unannounced SPs to evaluate the effect of an educational intervention on pediatric residents’ skills in classifying the severity of their patients’ asthma. Six individual SPs were trained to simulate patients with four different severities of asthma and were then embedded within the residents’ regular ambulatory clinics. Their identities were unknown to the trainee and the preceptor, and the residents who received the education were significantly better able to classify their patients’ conditions appropriately in the real clinical environment.

Develop: Considerations and Steps for Designing SP Encounters

The development of SP encounters will vary depending on the purpose, objectives, and format of the exercise. This section describes several important aspects of developing SP encounters, including case development, recruitment, hiring, and training. Table 13.2 lists several questions to consider at this stage of SP encounter development.

Table 13.2 Questions for evaluating SP assessments

Nature of Encounter

If the nature of the encounter is instructional and the SP is expected to provide direct instruction to the trainee, a guide describing this process should be developed. The guide will vary greatly depending on several factors, including the duration, nature, and objectives of the encounter. Sample contents include a summary of the encounter and its relationship to the curriculum; PI qualifications and training requirements; schedules; policies; relevant teaching aids or models and instructions for their use during sessions; instructional resources such as texts, chapters, or videotaped demonstrations; and models for teaching focused procedures or examinations. For a sample training manual for PIs to teach behavioral counseling skills, see Crandall et al. [38].

An encounter incorporated within a high-stakes assessment intended to determine promotion or grades will require evidence to support the validity, reliability, and acceptability of the scores. In this case, the individual encounter should be considered in relation to the broader context of the overall assessment as well as the evaluation system within which it is placed. For principles of “good assessment,” see the consensus statement of the 2010 Ottawa Conference [29], an international biennial forum on the assessment of competence in healthcare education.

Case Development

Although not all encounters require the SP to simulate a patient experience, the majority require that a case scenario be developed describing, in varying detail, the patient’s role, affect, demographics, and medical and social history. The degree of detail will vary according to the purpose of the encounter. For encounters that are lower stakes or that do not require standardization, the case scenario will be less detailed and may simply outline the expectations and provide a brief summary of the character. Conversely, a high-stakes encounter that includes simulation and physical examination will require a fully detailed scenario and guidelines for simulating the role and the physical findings. Typically, a standardized encounter requires a case scenario that includes a summary; a description of the patient’s presentation and emotional tone; the current and past medical history; lifestyle preferences and habits; and family and social history. If the SP is expected to assess performance, the scenario will also include a tool for recording the performance and a guide that carefully describes its use. If the SP is expected to provide verbal or written feedback to the trainee, a guide describing this process should also be included.

Although issues related to psychometrics are beyond the scope of this chapter, the text below describes a standard process for developing a single SP encounter intended for use as one of multiple stations included in a performance assessment. For a comprehensive description of psychometric matters, see the Practical Guide to the Evaluation of Clinical Competence [39] and the Standards for Educational and Psychological Testing [40].

To facilitate the case selection and development processes, physician educators should be engaged as clinical case consultants. Often, when encounters are designed to assess trainee performance, multiple physician educators are surveyed to determine what they believe to be the most important topics or challenges to include, the key factors that are critical to performance, and how much weight should be placed upon these factors.

Once the topic or presenting complaint for the case has been selected, this information should be added to the blueprint described above. The next step is to gather pertinent details from the physician educator. Ideally, the case scenario will be based on an actual patient, with all identifying information removed prior to use; this makes the encounter more authentic and eases the development process. Additionally, Nestel and Kneebone [41] recommend a method for “authenticating SP roles” that integrates actual patients into all phases of the process, including case development, training, and delivery. They argue that SP assessments may reflect professional, but not real patient, judgments and that involving actual patients in the process results in a more authentic encounter. This recommendation is further supported by a recent consensus statement [29], which calls for the incorporation of the perspectives of patients and the public within assessment criteria. One effective strategy for increasing the realism of a case is to videotape interviews with actual patients about their experiences. The affect, language, and oral history expressed by the actual patients can then be used to develop the case and train the SPs.

A common guide to case development is based on the works of Scott, Brannaman, Struijk, and Ambrozy (see Table 13.3). The 15 items listed in Table 13.3 have been adapted from the Standardized Patient Case Development Workbook [42]. Once these questions are addressed, the SP educator typically transposes the information and drafts the training materials, the SP rating scale/checklist, and a guide for its use. See Wallace [43], Appendix A, for samples of each of the above for a single case. This draft is then reviewed by a group of experts. One method for gathering content-related evidence to support the validity of the encounter is to survey physician experts regarding several aspects of the encounter; this information is then used to further refine the materials. Sample content evaluation questions include: (1) Does the encounter reinforce or measure competencies that are necessary for a (level) trainee? (2) Does the encounter reinforce or measure competencies that are aligned to curricular objectives? (3) How often would you expect a (level) trainee to perform such tasks during his/her training? (4) Does the encounter require tasks or skills infrequently encountered in practice that may result in high patient risk if performed poorly? (5) Is the context of the encounter realistic in that it presents a situation similar to one that a provider might encounter in professional practice? (6) Does the encounter represent tasks that have been assessed elsewhere, either in written examinations or by direct observation?

Table 13.3 SP encounter pertinent details

Very little attention has been paid in the published literature to the development of SP cases and related materials. In a comprehensive review of the literature over a 32-year period, Gorter et al. [44] found only 12 articles that reported specifically on the development of SP checklists in internal medicine. They encourage the publication and transparency of these processes in order to further develop reliable and valid instruments. Despite the lack of attention in published reports, the design and use of the instruments the SP uses to document and/or rate performance are critical to the quality of the data derived from the encounter. Simple binary (done versus not done) items are frequently used to determine whether a particular question was asked or a behavior was performed. Other formats, such as Likert scales and free-text response options, are typically used to gather perspectives on communication and professionalism skills. Overall global ratings of performance are also effective; although a combination of formats is most valuable, there is evidence to suggest that global ratings are more reliable than checklist scores alone [45].
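
To make the mix of item formats concrete, the sketch below combines binary checklist items, a Likert-type communication rating, and an overall global rating into a single station score. It is a hypothetical illustration only: the items, scales, and equal weighting are invented, and real instruments would set weights during blueprinting and standard setting.

```python
# Hypothetical SP scoring record combining common item formats:
# binary (done / not done) checklist items, a 1-5 Likert-type communication
# rating, and a 1-5 overall global rating. Items and weighting are invented.

checklist = {                       # binary items recorded by the SP
    "asked about onset of symptoms": True,
    "asked about medication use":    False,
    "washed hands before the exam":  True,
}
communication_rating = 4            # 1 (poor) to 5 (excellent)
global_rating = 3                   # 1 (fail) to 5 (outstanding)

checklist_pct = 100 * sum(checklist.values()) / len(checklist)
communication_pct = 100 * (communication_rating - 1) / 4
global_pct = 100 * (global_rating - 1) / 4

# Equal weighting of the three components, purely for illustration.
station_score = (checklist_pct + communication_pct + global_pct) / 3
print(f"Checklist {checklist_pct:.0f}%, communication {communication_pct:.0f}%, "
      f"global {global_pct:.0f}% -> station score {station_score:.0f}%")
```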

SP Recruitment and Hiring

There are several qualities to consider when recruiting individuals to serve as SPs. The minimum qualifications will vary depending upon the nature of the role and the SP encounter. For example, if the SP is hired to serve as a PI, then teaching skills and basic anatomical knowledge are necessary. If recruiting someone to portray a highly emotional case, an individual with acting experience and/or training would be beneficial. In addition to these qualities, it is important to screen potential SPs for any bias against medical professionals; SPs with hidden (even subconscious) agendas may disrupt or detract from the encounters. A useful screening question is, “Tell us about your feelings towards and experiences with physicians and other healthcare professionals.” It is also important to determine whether the candidate has had negative personal or family experiences related to the role she is being recruited to portray. Although empirical data to support this recommendation are lacking, common sense suggests that repeated portrayal of a highly emotive case that happens to resemble a personal experience may be unsettling to the SP; examples include receiving news of a breast cancer diagnosis, portraying a child abuser, or portraying a patient grieving the recent loss of a parent. Identifying potentially problematic attitudes and conflicting personal experiences in advance will prevent problems in the execution phase.

Identifying quality SPs can be a challenge. Recruiting resources include theater groups and centers, volunteer offices, schools, minority groups, and student clubs. Advertisements in newsletters, on intranets, and on social media will typically generate a large number of applicants, which, depending on the need, may be excessive. The most successful recruiting source is your current pool of SPs: referrals from existing SPs have been reported as the most successful method for identifying quality applicants [45].

Ideally, individual SPs should not be overexposed to the trainees: attempts should be made to avoid hiring SPs who have worked with the same trainees in the past. Always arrange a face-to-face meeting with a potential SP before hiring, and agree to a trial period if multiple encounters are planned. After the applicant completes an application, there are several topics to address during the interview, including those listed in Table 13.4.

Table 13.4 SP interview questions

The amount SPs are paid for their work varies according to the role (Table 13.5), encounter format, expectations, and geography. A survey of US and Canadian SP programs [46] revealed that the average hourly rate paid to SPs was $15 USD for training, $16 USD for role portrayal, and $48 USD for being examined and teaching invasive physical examination skills. The rates were slightly higher in the western and northeastern US regions.

Table 13.5 Qualities of standardized patients according to role

SP Training

The training of SPs is critical to a successful encounter. Figure 13.2 displays a common training process for SPs preparing for a simulated, standardized encounter in which they are also expected to assess trainees’ skills. This process is intended to serve as a model and should be adapted to suit the needs of the encounter and the SPs being trained. Unfortunately, there is a lack of evidence to support specific SP training methods. A comprehensive review of SP research reports [47] found that, although data from SP encounters are frequently used to draw conclusions about research outcomes, fewer than 40% of authors made any reference to the methods used to train the SPs and/or raters.

Fig. 13.2

Common training process for SP encounters

For a comprehensive text on training SPs, see Coaching Standardized Patients for Use in the Assessment of Clinical Competence [43]. Wallace described six skills critical to an effective simulated standardized performance. The following should be attended to throughout the training process:

1. Realistic portrayal of the patient
2. Appropriate and unerring responses to whatever the student says or does
3. Accurate observation of the medical student’s behavior
4. Flawless recall of the student’s behavior
5. Accurate completion of the checklist
6. Effective feedback to the student (written or verbal) on how the patient experienced the interaction with the student.

As stated above, the type and duration of training will vary depending upon the expectations and purpose of the encounter. Details regarding training sessions for role portrayal, instruction, evaluation, and feedback are described below. Training SPs to simulate a standardized role with no further expectations takes 1–2 h over one or two sessions. If the SPs are also expected to document and/or rate the performance of the trainees, an additional 2–3-h session is required. A trial run with all major players and simulated trainees to rehearse the actual encounter is strongly encouraged; this session typically takes an additional 1–3 h and, depending on performances, may lead to additional training. In 2009, Howley et al. [46] surveyed SP programs throughout the USA and Canada and found that the average reported time to train a new SP before performing his or her role was 5.5 h (SD = 5), with the majority of respondents reporting that it varied according to the type of encounter. For example, if the SP is expected to teach trainees, the amount of preparation required will be substantially reduced if the PI has prior training in healthcare delivery.

Regardless of the role that the SP is being trained to perform, all SPs should be oriented to the use of SPs in healthcare, the policies and procedures of the program, and the general expectations of the role. It is also beneficial to share the perspectives of trainees and other SPs who have participated in similar encounters to highlight the importance of the contribution he/she is about to make to healthcare education.

Role Portrayal

After the initial orientation, the SP reviews the case scenario with the SP educator. If multiple individuals are being hired to portray the same role, the SPs should participate as a group. Standardization should be clearly defined, and its impact on their performance should be made explicit throughout the training. During a second session, the SP reviews the case in greater depth with the SP educator. If available, videotaped samples of the actual or a similar case should be shown to demonstrate the desired performance. Spontaneous versus elicited information should be carefully differentiated, and the SPs should have the opportunity to role-play the patient while receiving constructive feedback on their performances. A clinical case consultant also meets with the SPs to review clinical details and, if relevant, to describe and demonstrate any physical findings. To give the SPs a greater understanding of the encounter, the consultant should also demonstrate the interview and/or physical examination while each SP portrays the role. The final training session should be a trial run with all the major players, including simulated trainees, to provide the SPs with an authentic preparatory experience. During this trial, the SP educator and the clinical consultant should evaluate the performance of all SPs and provide constructive comments for enhancing their portrayal. See Box 13.1 for an SP Critique Form for role portrayal; these questions should be asked multiple times for each SP during the training process and throughout the session for continuous quality improvement. Depending on performance during the trial run, additional training may be required to fully prepare an SP for the role. As a final reminder, prior to the initial SP encounter, several “do’s and don’ts” of simulation should be reviewed (see Box 13.2 for a sample).

Teaching

Patient instructors, including GTAs and UTAs, often participate in multiple training methods, which typically include an apprenticeship approach. After initial recruitment and orientation, the new PI observes sessions led by experienced PIs, then serves as a model and secondary instructor for the exam, and finally serves as an associate instructor. Depending on the expectations of the PI role, the training may range from 8 to 40 h prior to participation, with additional hours to maintain skills. General participation and training requirements for PIs include (1) a health screening examination for all new and returning PIs, (2) universal precautions training, (3) independent study of anatomy and the focused physical examination, (4) review of an instructional video of the examination, (5) practice sessions, (6) performance evaluation by a physician and fellow PIs, and (7) ongoing performance evaluation for quality assurance and to enhance standardization of instruction across associates.

Evaluation/Rating

There are strong data supporting the use of SPs to evaluate the history taking, physical examination, and communication skills of (particularly junior) trainees [48, 49]. If an SP is expected to document or evaluate performance, it is imperative that she/he be trained to do so in an accurate and unbiased manner. The goals of this session are to familiarize the SPs with the instrument(s) and to ensure that they are able to recall and document/rate performance according to the predetermined criteria. The instrument(s) used to document or rate performance should be reviewed item by item for clarity and intent, and a guide or supplement that clearly defines each item in behavioral terms should accompany the evaluation instruments. The instruments should be completed immediately after each encounter to increase recall and accuracy. One training technique for increasing the accuracy of ratings is to review and call attention to errors commonly made by SPs (and raters in general) when completing scales. Sample effects include halo/horn, stereotyping, Hawthorne, rater drift, personal perception, and recency. Several vignettes are developed, each depicting one of these errors, and the SPs are expected to determine which error is being made in each example and discuss its impact on the performance rating.

Another effective method for training SPs to use evaluation tools is to show a videotaped previous SP encounter and ask the SPs to individually complete the instrument based on the performance observed. The instrument may be completed during (or immediately after) the viewing; the exercise can then be repeated with another sample encounter, this time requiring the SPs to complete the instrument afterwards from recall. Collect the instruments and tally the results for visual presentation. The SPs then discuss, as a group, those items about which they disagree. Reaching consensus on rating scales can be particularly challenging, but in general, a behaviorally anchored scale will result in greater agreement among ratings. If necessary, replay the videotape to resolve any misunderstood behaviors that arise during the training exercise.
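
One way to “tally the results for visual presentation” is simply to count, item by item, how many SPs marked the behavior as done and flag items that lack unanimous agreement for group discussion. The sketch below is illustrative only; the checklist items and responses are invented.

```python
# Illustrative tally of training ratings: several SPs score the same
# videotaped encounter, and items without unanimous agreement are flagged
# for group discussion. Item names and responses are invented.

ratings = {   # item -> one True/False response per SP (True = marked done)
    "asked about allergies":         [True,  True,  True,  True],
    "explained the diagnosis":       [True,  False, True,  False],
    "checked patient understanding": [False, False, True,  False],
}

for item, responses in ratings.items():
    done = sum(responses)
    agreement = max(done, len(responses) - done) / len(responses)
    flag = "" if agreement == 1.0 else "  <-- discuss and replay the videotape"
    print(f"{item:30s} {done}/{len(responses)} marked done "
          f"(agreement {agreement:.0%}){flag}")
```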

Feedback

One of the greatest benefits of SP encounters is the immediate feedback delivered by the SP to the trainee. Whether the feedback is provided in writing or orally, training SPs to deliver it constructively is critically important. As in other areas, the training content and duration will vary according to the nature of the role and the purpose of the encounter. There are several resources available for feedback training which can be readily adapted to suit the needs of a particular encounter [50–52].

The primary goal of this training session is to equip the SPs with the knowledge and skills to provide quality constructive feedback to the trainees. Feedback is defined as “information communicated to the learner that is intended to modify the learner’s thinking or behavior for the purpose of improved learning” [53]. SPs should be trained to deliver feedback that is descriptive and nonevaluative, and the focus of the feedback should be consistent with the intent and expertise of the SP. Unless serving as trained instructors, SPs should limit their feedback to how the patient felt during the encounter; feedback regarding clinical skills should be reserved for faculty or others who hold this expertise. The SOAP model and the DESC script are two effective methods for training SPs to frame and deliver constructive feedback to trainees [51].

Once the parameters and principles of feedback have been reviewed, training should continue with opportunities for the SPs to put this knowledge into practice. To begin, ask the SPs to view a videotaped encounter and assume the role of the SP in the video. Immediately afterwards, ask the SP to deliver feedback to the “trainee,” who in this exercise is played by another SP or a staff member. The SP role-plays delivering feedback while an observer critiques the performance. Refer to Box 13.3 for sample questions to guide the critique.

The SPs’ ability to provide constructive written feedback should not be ignored. Many of the same principles of constructive feedback apply to both oral and written communication. One method for reinforcing SPs’ writing skills is to provide a series of feedback statements, some of which are inappropriate, and ask the SPs to review each statement and, where needed, rewrite it as a more constructive comment.

Trial Run

After SPs have been trained to perform their various roles (simulator, evaluator, and/or instructor), it is important to provide an opportunity to trial the encounter. These dress rehearsals should proceed as if they were the actual event to allow for final preparation and fine-tuning of performance. Depending on the nature of the encounter/s, the objectives of the trial run may include providing SPs with a better understanding of the format of the encounter, critiquing the SPs’ role portrayal, determining the SPs’ evaluation skills, orienting and training staff, and testing technical equipment. Simulated trainees should be invited to participate and provide feedback on the encounter, including the SPs’ portrayal. Although minor revisions to the cases may be made between the trial and the first encounter, it is preferable for the materials to be in final form prior to this session. Videotaped review of the encounter/s, with feedback provided to the SPs, is an excellent method to further enhance SP performance and reinforce training objectives. The SP Critique Forms (Boxes 13.1 and 13.3) described above can be used to provide this feedback.

Execute and Appraise: Steps to Administer and Evaluate SP Encounters

The administration of SP encounters will vary by level of complexity and use of outcomes. This final section summarizes several recommendations for ensuring a well-run encounter. The efforts expended earlier to align the encounter to relevant and appropriate objectives; to recruit, hire, and train SPs suited to perform and evaluate trainee performance; and to construct sound training and scoring materials will go a long way toward strengthening the encounter and the outcomes it yields.

Major Players

Directing an SP encounter can be a very complex task. Depending on the number of simultaneous encounters, the number of roles, the nature of the cases, and the number of trainees, the production may require dozens of SPs and staff support. See Table 13.6 for a description of major players and their roles in an SP assessment.

Table 13.6 OSCE/SP assessment major players and their roles

Orientation/Briefing

As with any educational offering, the orientation of trainees to the SP encounter is critical to the overall quality of the experience. Trainees should know the purpose and expectations of the encounter/s; they should also be made aware of the quality of the educational experience, how it is aligned to their curricula, how to progress through the encounter/s, the implications of their performance, and how they can provide feedback on the encounter/s for future enhancements. Ideally, trainees should be able to self-prepare for the encounter by reviewing relevant literature, training videos, policies and procedures, etc. These preparation strategies are consistent with Knowles et al.’s [54] assumptions about adult learners: they need to know what they are going to experience, how it applies to their daily practice, and how they can self-direct their learning.

When orienting trainees and executing SP encounters, it is important to maintain fidelity by minimizing trainees’ interactions with the SPs outside of the encounters. Trainees should not see the SPs until they greet them in the simulation. In addition, an individual SP should not engage with the same trainees while portraying different roles; although this may be impractical, steps should be taken to avoid overexposure to individual SPs. During an encounter, the SP should always maintain his character (with the notable exception of the “time-in/time-out” format). If the trainee breaks role, the SP should not reciprocate.

Quality Assurance

Quality-monitoring methods, such as inter-rater agreement and case portrayal checks, should be implemented during administration. Woehr and Huffcutt [55] found that raters who were trained on the standards and dimensionality for assigning ratings were more accurate and objective in their appraisals of performance. Specific methods include using an SP Critique Form (Box 13.1), or a similar tool, to audit the accuracy and realism of the role portrayal. If multiple encounters are required over an extended period of time, critiques should be done periodically to assess performance. Similarly, the SPs’ delivery of written and oral feedback should be monitored to prevent possible performance drift (see Box 13.3). Such quality assurance measures have been shown to significantly reduce performance errors by SPs [56]. A second approach to assuring quality is to introduce additional raters into the process: a second rater views the encounter (in real or lapsed time) and completes the same rating and checklist instruments as the SP. An assessment of inter-rater agreement will help determine whether the ratings are consistent and whether individual SPs need further training or recalibration.
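
Inter-rater agreement between the SP and a second rater can be summarized with simple percent agreement or a chance-corrected index such as Cohen’s kappa. The sketch below is illustrative only: the item-level ratings and the 0.6 retraining threshold are invented, not drawn from the chapter.

```python
# Illustrative quality-assurance check: compare an SP's binary checklist
# ratings with a second rater's ratings on the same items, using percent
# agreement and Cohen's kappa. Data and the 0.6 threshold are invented.

sp_ratings     = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]   # 1 = behavior marked done
second_ratings = [1, 0, 0, 1, 0, 1, 1, 1, 1, 1]

n = len(sp_ratings)
observed = sum(a == b for a, b in zip(sp_ratings, second_ratings)) / n

# Chance-corrected agreement (Cohen's kappa) for two raters, binary items.
p_sp, p_2nd = sum(sp_ratings) / n, sum(second_ratings) / n
expected = p_sp * p_2nd + (1 - p_sp) * (1 - p_2nd)
kappa = (observed - expected) / (1 - expected)

print(f"Percent agreement: {observed:.0%}, Cohen's kappa: {kappa:.2f}")
if kappa < 0.6:   # arbitrary illustrative threshold
    print("Consider further training or recalibration for this SP.")
```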

Debriefing

Although there are clear guidelines for debriefing trainees following simulation encounters [57], there is a paucity of published reports on debriefing SPs. It is important to debrief or de-role the SP following the encounter. This is particularly important for those cases which are physically or emotionally challenging. Methods used to detach the SP from his role include discussions about his orientation and trainee behaviors during the sessions. Casual conversations about future plans and life outside of their SP role will also facilitate the debrief process. The goal of this process is to release tensions, show appreciation for the work, distance the SP from the emotions of the role, and allow the SP to convey his feelings and experiences about his performance [58].

Evaluation

The evaluation of the encounter should be integrated throughout the entire IDEA process. Data to defend the quality of the encounter are gathered from the start, when multiple stakeholders are involved in identifying the needs of the trainees, in developing the case and associated materials, and in training the SPs. Evaluation of the outcomes is critical for assessing the overall value of the offering as well as areas for future enhancement. The evidence for the validity, reliability, and acceptability of the data resulting from performance assessments was described earlier; this evidence determines the utility of the SP assessment for making formative and summative decisions.

A common four-step linear model by Kirkpatrick and Kirkpatrick [59] can be used to appraise the encounter/s, particularly instructional strategies. This model includes the following progressive outcomes: (1) reaction to the offering (how the trainee felt about the experience), (2) whether learning occurred (pre- to post-differences in performance), (3) whether behavior changed (generalized to actual behaviors in practice), and (4) whether results were produced (improvements in patient care or the system in which the trainee practices). The majority of SP encounters have focused on levels 1 and 2 of this model, using participant surveys and pre-/posttests of performance and/or knowledge to measure the effect of the encounter on knowledge, comprehension, and/or application. Levels 3 and 4 are relatively difficult to measure; however, if positive changes at these levels can be attributed to the encounter/s, the outcomes are commendable.

Conclusion

Whether the purpose is to certify a level of achievement, provide feedback to trainees about their clinical skills, or provide faculty with information about curriculum effectiveness, standardized patients will continue to play a vital role in the education of our healthcare professionals. Although the development of optimal SP encounters requires time, commitment, and resources, the reward is our ability to instruct and assess trainees in a safe, authentic, and patient-centered environment.