Abstract
This opening chapter is an indictment of traditional clinical education in the health professions, which is not effective at achieving goals of skill acquisition, use, and maintenance by individuals or teams. The chapter argues from evidence that the time-honored “see one, do one, teach one” approach to clinical education—the so-called apprenticeship model—for doctors, nurses, and other health professionals is obsolete. Introductory framing remarks and an example of medical education featuring mastery learning compared to traditional medical education are followed by three chapter sections: (a) historical origins of clinical education; (b) the current state of affairs in clinical education, including uneven educational opportunities, learner evaluation and feedback, and clinical practice outcomes; and (c) new directions for clinical education, namely, the learning sciences, active learning, deliberate practice, reliable measurement with feedback, and mastery learning.
Keywords
- Active learning
- Clinical education
- Deliberate practice
- Feedback
- Learning sciences
- Mastery learning
- Reliable measurement
- Simulation-based education
What does the American public expect when accessing the healthcare system? While expectations vary between individuals, most Americans expect to receive high-quality medical care from well-trained physicians and other members of the healthcare team. US medical schools graduate nearly 19,000 students each year (https://www.aamc.org/download/321532/data/factstableb2-2.pdf) and certify them fit for graduate medical education (GME) in core residency programs such as internal medicine, general surgery, neurology, and pediatrics. US nurse education programs produce over 105,000 graduates at the basic RN level annually (http://www.nln.org/newsroom/nursing-education-statistics/graduations-from-rn-programs). Can we say with confidence that all of these health professionals are ready to make the transition to graduate education or practice and provide skilled healthcare to their patients? Unfortunately, the answer is no. During a 15-year journey, our research group has rigorously assessed common clinical skills of hundreds of physicians-in-training and their supervisors. Despite these clinicians’ diplomas from prestigious medical schools and, often, substantial clinical experience, we have consistently found weak performance of core clinical skills such as bedside procedures and patient and family communication. This book recounts our journey to understand the issues surrounding the development of health professions expertise and to develop a path forward that ensures that health professions graduates are competent to care for patients.
Medical education research data can tell a powerful story about the problem we aim to solve and the solution we propose—mastery learning. Figure 1.1 presents data from a mastery learning skill acquisition study involving 58 internal medicine (IM) residents and 36 neurology residents learning to perform lumbar puncture (LP) [1]. Lumbar punctures are bedside procedures performed by medical professionals to obtain cerebrospinal fluid (CSF) and evaluate patients for central nervous system conditions such as life-threatening infections or spread of cancerous tumors. The IM residents were all in the first postgraduate year (PGY-1) of training at the McGaw Medical Center of Northwestern University in Chicago after earning MD degrees from medical schools across the United States. The neurology residents were PGY-2, PGY-3, and PGY-4 volunteers for this cohort study drawn from three other academic medical centers in metropolitan Chicago. All of the neurology residents had experience with the LP procedure that they learned using traditional, learn-by-doing, bedside methods practicing on real patients.
The IM residents had little or no LP experience. The IM residents started LP learning with a pretest on a mannequin using a 21-item LP skills checklist. The IM residents then experienced a systematic LP mastery learning skill acquisition curriculum involving feedback about pretest performance, deliberate practice (DP) of LP skills, formative assessments, frequent actionable feedback, and coaching and more practice for at least 3 hours in a simulation laboratory. The IM residents were assessed to see if they met or surpassed a minimum passing standard (MPS) on the skills checklist set earlier by an expert panel. Posttest scores (after training completion) from the PGY-1 IM residents were compared to scores of the neurology residents.
The research report shows that one of the 58 IM residents met the MPS at pretest and 55 of the 58 (95%) met the MPS at posttest after the 3-hour simulation-based curriculum. The three IM residents who did not reach the MPS at immediate posttest later reached the goal with less than 1 hour of more practice. This is a 107% improvement from pretest to posttest measured as LP checklist performance by the IM residents.
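The pass-rate percentages cited above follow directly from the reported counts. A minimal Python sketch (illustrative only, using the figures reported in the study [1]) reproduces them:

```python
# Counts reported in the lumbar puncture mastery learning study [1].
n_im_residents = 58     # PGY-1 internal medicine residents
met_mps_pretest = 1     # met the minimum passing standard (MPS) at pretest
met_mps_posttest = 55   # met the MPS at immediate posttest

pretest_rate = met_mps_pretest / n_im_residents    # about 2%
posttest_rate = met_mps_posttest / n_im_residents  # about 95%, as reported

print(f"Pretest pass rate:  {pretest_rate:.0%}")
print(f"Posttest pass rate: {posttest_rate:.0%}")
```

Note that the 107% figure refers to improvement in mean checklist scores, which are not reproduced here; the sketch checks only the pass rates.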
Figure 1.1 also shows that by contrast, only 2 of 36 (6%) of the traditionally trained PGY-2, PGY-3, and PGY-4 neurology residents met the MPS despite years of experience and performing multiple LPs on real patients. This study also revealed two surprising findings about the traditionally trained neurology residents not shown in Fig. 1.1. First, nearly 50% of the PGY-2, PGY-3, and PGY-4 neurology residents could not report the correct anatomical location for the procedure. They did not know where to stick the needle. Second, over 40% of the neurology residents could not list routine tests (glucose, cell count, protein, Gram stain, culture) to be ordered for the CSF after the fluid sample was drawn. They did not know about basic laboratory medicine.
Publication of the educational findings from this cohort study in the journal Neurology prompted a strong statement from a journal editorial which stated that these findings were a clear “wake-up call” regarding traditional methods of medical education and questioned whether these methods are “enough to ensure the best education, and thus the best care for patients” [2].
This research example is one short chapter in a long story about today’s approaches to clinical education in the health professions. As the LP example illustrates, traditional clinical health professions education grounded in clinical experience produces uneven results that do not meet the expectations of the profession or the public. Other examples address the now well-known finding that clinical experience alone—expressed as either years of medical practice or number of performed clinical procedures—is not a proxy for medical competence [3, 4].
A recent report from the National Academies of Sciences, Engineering, and Medicine titled Improving Diagnosis in Health Care [5] demonstrates that traditional experiential health professions education produces many clinicians with variable diagnostic acuity. The report makes recommendations about improving diagnostic education for healthcare providers and identifies a number of areas of performance that could be improved, including:
- Clinical reasoning
- Teamwork
- Communication with patients, their families, and other healthcare professionals
- Appropriate use of diagnostic tests and the application of these results to subsequent decision making
- Use of health information technology
These areas of performance improvement make up the majority of the daily tasks done by healthcare providers in clinical practice.
The idea of “excellence for all,” a foundational principle of mastery learning, is a far cry from the expectations and measured outcomes now achieved in most settings of health professions education. Student nurses, physicians, pharmacists, occupational therapists, and trainees in many other health professions advance through clinical education programs where training time is fixed and learning outcomes vary widely. This is despite ambitious goals to educate students, residents, and fellows to deliver uniformly safe and effective healthcare under supervision and when working autonomously as individuals and teams.
The times are changing in health professions education. Awareness is growing that traditional, experience-based models of clinical education are antiquated and ineffective [6, 7]. There are at least three reasons for this awakening. First, technological advances in the biomedical, engineering, and behavioral sciences are growing exponentially every year. New education models are needed to realistically prepare clinicians for the future of the professions [8]. Second, there is a growing emphasis across the health professions on using rigorously measured learning outcomes as benchmarks for student curriculum progress. The nursing profession is moving toward outcomes and competencies as education targets for graduates at several levels [9]. Undergraduate medical education is now focused on Core Entrustable Professional Activities (EPAs) for Entering Residency as a set of minimum outcome expectations [10]. Analogous “milestones” for graduate medical education aim to bring greater uniformity to specialty curricula and rigor to educational outcome measurement [11, 12]. These innovations are a big step toward improved accountability in health professions education, which has historically been diffuse or lacking. Third, health professions education has become increasingly reliant on simulation technology with deliberate practice as a method of instruction and a platform for research [13, 14]. This is due to a growing body of evidence that simulation is superior to traditional clinical education on grounds of effectiveness [15], cost [16] (Chap. 19), and patient safety [17] (Chap. 16).
This opening chapter has three sections. The first section traces the historical origins of clinical medical education from antiquity through the middle ages to the early twentieth century. Other health professions such as dentistry, nursing, midwifery, and pharmacy emerged during that time. The new health professions expanded, matured, and underwent educational evolutions similar to medicine’s. The second section describes the current state of affairs in clinical health professions education, starting with its origins in Sir William Osler’s ideas about the natural method of teaching (i.e., experiential learning). The section proceeds to address problems with the status quo in clinical education including (a) uneven educational opportunities, (b) lack of rigorous learner evaluation and feedback, and (c) poor clinical practice outcomes. The third chapter section presents a call to action and advances new directions for clinical education in the health professions.
Historical Origins of Clinical Education
The history of clinical education in medicine has been traced from antiquity to the middle ages in the writings of Theodor Puschmann [18] and other scholars such as Henry Sigerist [19, 20]. These authors teach that clinical medicine in the ancient world, in such places as Egypt, Mesopotamia, and India, was taught using an apprenticeship model. Boys in early adolescence were selected and trained to be physicians, often due to family tradition and primogeniture. The advent of European universities in fourteenth- and fifteenth-century Berlin, London, Padua, Paris, Prague, Zurich, and other cities began to embed medical education in academic settings, yet clinical training still relied on the apprenticeship. Learning by doing was the medical education principle at that time, despite the absence of a scientific foundation for medical practice.
The modern era of clinical medical education in North America and Western Europe has been chronicled by Kenneth Ludmerer [21, 22] and many other writers including James Cassedy [23], Paul Starr [24], and Molly Cooke and colleagues [25]. This historical scholarship addresses medical education events and trends from the mid-nineteenth century, including the US War between the States, to the early twentieth century. This work speaks to medical curricula and student evaluation, acknowledging the primitive technologies that were available, judged by today’s standards. Historical medical education scholarship by Molly Cooke and colleagues [25] also identifies the Flexner Report [26], Medical Education in the United States and Canada, as a turning point that improved medical education standards by grounding professional education in university settings, enforcing rigorous admissions standards, emphasizing clinical science, and weeding out fly-by-night proprietary medical schools. By contrast, medical sociologist Paul Starr [24] downplays the watershed status of the Flexner Report. Starr argues that economic conditions, state licensing requirements, and other secular trends before and after publication of the Flexner Report were the real reasons for medical education reform in the early twentieth century.
Similar historical conditions in clinical care and education were underway for other healthcare professions including nursing [9], dentistry [27], pharmacy [28], and physical therapy [29]. In the early twentieth century, all US healthcare professions were afloat on the same river—after classroom and laboratory instruction in the basic health sciences, clinical education was wholly experiential and based on chance encounters. At that time, little or nothing was said or known about novel clinical education technologies including systematic curriculum planning, formative and summative assessment, psychometric testing, problem-based learning (PBL), objective structured clinical examinations (OSCEs), standardized patients (SPs), simulation-based exercises, and DP that are now in widespread use.
Current State of Affairs in Clinical Education
The clinical education legacy of physician Sir William Osler and his Johns Hopkins School of Medicine colleagues has been described in detail elsewhere [6, 7]. In brief, Osler expressed his ideas about the best approach to clinical education for US doctors in a 1903 address to the New York Academy of Medicine titled, “The hospital as a college.” The talk was published later in Aequanimitas [30], a collection of his essays. Osler’s ideas about clinical education were shaped by his prior experience in Europe where he considered medical education to be far more advanced. Osler writes, “The radical reform needed is in the introduction into this country of the system of clinical clerks….” He continues, “In what may be called the natural method of teaching the student begins with the patient, continues with the patient, and ends his studies with the patient [emphasis added]. Teach him how to observe, give him plenty of facts to observe, and the lessons will come out of the facts themselves” [30].
William Halsted, a Johns Hopkins surgeon colleague, echoed Osler’s principles in a 1904 essay, “The training of the surgeon” [31]. Osler and Halsted argued that the clinical medical curriculum is embodied in patients. Medical historian Kenneth Ludmerer elaborates this position, “… house officers admitted patients by what might be termed the ‘laissez faire method of learning.’ Interns and residents received patients randomly … Medical educators presumed that, over time, on a large and active teaching service, house officers would be exposed to a sufficient volume and variety of patients to emerge as experienced clinicians” [22].
Drs. Osler and Halsted were considered visionary medical educators in their day. However, the clinical education model they championed is chiefly passive, active only in the sense that students encountered many patients. The Osler model has no place for today’s science of learning or science of instruction: structured, graded educational requirements; deliberate skills practice; objective formative and summative assessment with feedback; multimedia learning; accountability; and supervised reflection for novice doctors to master their craft [6, 7, 32, 33]. The Osler clinical curriculum tradition dominated twentieth-century medical education and continues into the twenty-first century.
The nineteenth-century model of clinical medical education is seen in 2020 as undergraduate clinical clerkships, postgraduate medical residency rotations, and subspecialty medical and nursing fellowships. Clinical learners participate in patient care without adequate supervision and with random clinical experiences as they advance in the curriculum. Clinical learners rarely receive feedback. Educational experiences are structured by time (days, weeks, or months) and location (clinical sites) [34]. Because of the reliance on this time-based model, learners are rarely engaged in planned and rigorous educational activities that address measured learning outcomes. There are few tests that really matter beyond multiple-choice licensure and specialty board examinations. Structural and operational expressions of Osler’s natural method of teaching are seen every day at medical schools, nursing schools, and residency and fellowship programs where traditional, “time-honored” educational practices like morning report (daily group discussions about a select patient’s diagnosis and treatment) and professor rounds (informal rounds where a senior clinician sees “interesting” patients with a group of residents and medical students) are routine, sustained, and valued. Foundation courses in nursing education fulfill a similar role. Yet these clinical education experiences designed over a century ago now operate in a complex healthcare environment where health professions education is often subordinate to patient care needs and financial incentives.
Osler’s natural method of teaching has been in place for over a century in clinical education among the health professions. The model worked well in the early twentieth century, especially at prestigious medical and health professions schools where patients were hospitalized for extended lengths of stay, medical and educational technology were very simple, and the faculty focus was solely on patient care and clinical service. However, the Osler model has limited utility today due to many competing clinical priorities, financial disincentives, and at least three educational flaws: (a) uneven educational opportunities, (b) lack of rigorous learner evaluation and feedback, and (c) poor clinical practice outcomes.
Uneven Educational Opportunities
Experiential medical education, a synonym for Osler’s natural method of teaching [30] and Ludmerer’s [22] laissez faire method of learning, is not a good way to structure and manage a medical student’s or resident’s educational agenda. On grounds of educational experience alone, student exposure to patient problems needs to be broad, deep, and engaging. It needs to be controlled, with evaluation and feedback, not left to chance.
A telling example of uneven educational opportunities is a surgical education study reported by Richard Bell and colleagues [35] that documented the operative experience of residents in US general surgery residency education programs. Surgery residency program directors graded 300 operative procedures A, B, or C using these criteria: A, graduating general surgery residents should be competent to perform the procedure independently; B, graduating residents should be familiar with the procedure, but not necessarily competent to perform it; and C, graduating residents neither need to be familiar with nor competent to perform the procedure. The actual operative experience of all US residents completing general surgery training in June 2005 was compiled, reviewed, and compared with the three procedural criteria.
The study results enlighten, inform, and address Osler’s natural method of teaching directly. Bell et al. [35] report:
One hundred twenty-one of the 300 operations were considered A level procedures by a majority of program directors (PDs). Graduating 2005 US residents (n = 1022) performed only 18 of the 121 A procedures an average of more than 10 times during residency; 83 of the 121 procedures were performed on average less than 5 times and 31 procedures less than once. For 63 of the 121 procedures, the mode (most commonly reported) experience level was 0. In addition, there was significant variation between residents in operative experience for specific procedures.
The investigators conclude:
Methods will have to be developed to allow surgeons to reach a basic level of competence in procedures which they are likely to experience only rarely during residency. Even for more commonly performed procedures, the numbers of repetitions are not very robust, stressing the need to determine objectively whether residents are actually achieving basic competency in these operations.
These findings are reinforced by a nearly identical follow-up study published 4 years later by Malangoni and colleagues [36] that documented an increase in total operations performed by surgical residents. However, the operative logs of graduating surgery residents still showed a wide and uneven variation in practical experience with clinical cases. Many essential surgical procedures were neither performed nor practiced during residency education. This is strong evidence that Osler’s natural method of teaching, grounded solely in patient care experience, is insufficient to ensure the procedural competence of new surgeons. The authors conclude “…alternate methods for teaching infrequently performed procedures are needed” [36].
The Bell et al. [35] and Malangoni et al. [36] findings of very uneven, frequently nonexistent, clinical learning opportunities for surgeons in training are neither restricted to surgery nor unique to the present. Nearly four decades ago, Bucher and Stelling [37] documented via qualitative research the “randomness of rotation assignments for internal medicine residents.” Another 1970s observation was made by McGlynn and colleagues [38] that, “If left to chance alone, many residents do not in fact have an opportunity to manage patients with common problems such as coronary artery disease or to use common primary care medications such as insulin in their primary care practice…. The wide variety of clinical situations needed to catalyze the residents’ development of clinical judgment for primary care situations does not occur in many residents’ practices” [38].
Many other medical education research reports reinforce the idea that irregular clinical experience alone is not the pathway to clinical competence. A sample of three journal articles, beginning in the late 1970s, starts with “Physician profiles in training the graduate internist” [39]. This observational study of house-staff clinical practice found, “There was a fourfold difference in the total number of patient encounters, a twelvefold variation in average cost of ancillary services per patient visit, and more than a twofold variation in the average time spent per patient. …Range of variation was equally great in each year of training.” A contemporary expression of poor educational opportunities due to traditional clinical education is seen in the work of Peets and Stelfox [40] where “…over a 9-year period, the opportunities offered to residents to admit patients and perform procedures during ICU [intensive care unit] rotations decreased by 32% and 34%, respectively.” Other indictments of traditional clinical education in medicine report reduced resident “code blue” experience over a 6-year time span [41], “underexposure” of students at 17 US medical schools to essential bedside procedures and comfort in performing them [42], and a wide variation in the clinical and educational experience among pulmonary and critical care fellows due to the lack of a “common core” [43]. These and many other medical education studies document the power of inertia in today’s clinical education.
Unfortunately, these uneven educational opportunities lead to unsafe patient care when doctors graduate from residency or fellowship and enter clinical practice as attending physicians. For example, Birkmeyer and colleagues [44] rigorously evaluated the video-recorded surgeries of 20 attending bariatric surgeons in Michigan performing laparoscopic gastric bypass. This study showed significant variation in the surgical skills of these physicians, with less skilled surgeons causing more operative complications. Barsuk and colleagues [45] evaluated the simulated central venous catheter (CVC) insertion skills of 108 attending emergency medicine, IM, and critical care physicians with significant CVC insertion experience. Fewer than 20% of these doctors were able to demonstrate competent skills, measured by their ability to meet or exceed an MPS on a 29-item CVC insertion skills checklist. Yet these senior attending physicians were supervising residents and inserting CVCs frequently in their hospitals.
This problem of uneven educational opportunities for learners in clinical settings due to patient encounters governed by chance is not unique to the medical profession. Leaders in nursing education are sounding a similar alarm by pointing out that despite its longevity, the traditional apprenticeship model of clinical education in nursing is now obsolete [46,47,48,49].
Traditional clinical education in the health professions, grounded in Osler’s natural method of teaching, provides variable and insufficient opportunities for learners to acquire knowledge, skills, and attributes of professionalism needed for competent practice. A much more systematic, carefully managed, and accountable approach to clinical education is needed.
Learner Evaluation and Feedback
Health professions students are typically evaluated in three ways after classroom and laboratory instruction in the basic sciences and advancement to clinical education settings: (a) objective tests of acquired knowledge, (b) objective structured clinical examinations (OSCEs) in several formats, and (c) subjective evaluations of clinical performance.
Objective tests of acquired knowledge are ubiquitous in the health professions. They have a long history, dating to the formation of the National Board of Medical Examiners in the United States in 1915 [50] and the rise of psychometric science in the early twentieth century [51]. These evaluations are usually administered via multiple-choice questions, may cover hundreds of test items, require many hours of testing time, and yield highly reliable scores that are used to render high-stakes decisions about learner educational achievement and professional certification. The United States Medical Licensing Examination (USMLE) Steps (except for the clinical skills section) fulfill these purposes for the US medical profession [52]. Similar examinations are now in place in the United States for other health professions including nursing [53], dentistry [54], pharmacy [55], physical therapy [56], physician assistants [57], osteopathic medicine [58], and many other specialties.
Today’s tests of acquired knowledge in the health professions, now delivered in controlled, computer-based settings, are very sophisticated. The tests provide precise estimates of theoretical and factual learning among students, residents, and fellows in a variety of health sciences. Psychometric science has produced measurement methods and analytic technologies that are far ahead of other evaluation approaches used in health professions education [59].
Health professions learners receive norm-referenced feedback from objective tests of acquired knowledge often as a percentile rank in comparison with peers. This feedback is usually nonspecific. It does not pinpoint one’s knowledge-based strengths or weaknesses, only one’s relative standing among similar learners. Thus, norm-referenced feedback from acquired knowledge measurements cannot usually be used as a roadmap for improvement or as a pathway to boost one’s fund of knowledge in needed directions. In fact, Neely and colleagues [60] reported that USMLE scores had a negative association with the level of performance of PGY-3 IM residents measured by summative evaluations from faculty, peers, and patients. Another study showed USMLE test scores are not correlated with reliable measures of medical students’, residents’, and fellows’ skills in clinical examination, communication, and medical procedures [61].
The OSCE originated from the work of Ronald Harden at the University of Dundee in the United Kingdom in the 1970s [62]. Briefly, an OSCE is a measure of clinical skill acquisition and performance now used in a wide variety of health professions including medicine, nursing, and other specialties [63, 64]. The goal of an OSCE is to perform a rigorous, standardized assessment of a health professions student’s clinical skills, and sometimes theoretical knowledge, as a benchmark for professional school advancement or certification [65].
Health sciences students taking an OSCE rotate through a series of examination stations, usually of short duration (5–15 minutes). Each station probes student skill or knowledge at specific clinical competencies such as physical examination; history taking; communication with patients and their families; medical procedures; health promotion counseling; radiographic, telemetry, or other image interpretation; clinical reasoning; prescription writing; medication reconciliation; and many other challenges. OSCE assessments may involve SPs who play out scripted roles, simulations, analyses of biomedical specimens including blood and tissue samples, or entries and verification of record keeping systems like electronic health records. Learners respond to realistic clinical problems in an OSCE, either skill-based (e.g., suturing, chest compressions) or case-based (e.g., infant seizures). Performance is scored objectively using checklists or other measures that yield reliable data.
OSCEs in many variations, e.g., the mini-clinical evaluation exercise (mini-CEX) [66,67,68,69], are now used almost everywhere in the health professions. Their focus on measuring clinical skill acquisition and providing feedback to clinicians in training has had a palpable impact on health professions education. The Association of American Medical Colleges [70], for example, reports that the percentage of US medical schools that require students to undergo a final SP/OSCE examination before graduation increased from 87% in academic year 2006–2007 to 91% in 2014–2015. Over the same 9-year span, the percentage of US medical schools that require students to pass a final SP/OSCE examination increased from 58% to 74%. Thus, while nearly all US medical students experience a summative OSCE, a much smaller percentage must perform to a high standard on one.
Creation and management of OSCEs in health professions education settings is labor intensive. An OSCE must have a sufficient number of stations (usually about 12), trained and calibrated raters, meaningful MPSs for individual stations and the total test, and consistent SPs to yield reliable data that are useful for making educational decisions [71]. Such conditions require dedication and hard work but can be reached in most educational settings.
Subjective student and resident evaluations are also ubiquitous in the health professions but address learning processes and outcomes that are different from knowledge acquisition [72]. Learning processes and outcomes evaluated subjectively typically involve faculty perceptions of clinical skills and attributes of professionalism that include interpersonal and communication skills, teamwork, procedural competence, altruism, clinical judgment, and efficiency. These subjective evaluations of clinical learners are made by experienced, but not necessarily trained, educational supervisors. The supervisor’s evaluations of students are usually recorded on rating scales ranging from poor to excellent performance. Subjective learner evaluations in the health professions are intended to complement objective measures of knowledge acquisition, and clinical skills assessment via OSCEs, to present a broad picture of student readiness to practice professionally.
There is a downside to subjective faculty evaluations of student clinical fitness: decades of research show that faculty ratings of student clinical performance are subject to many sources of bias and error that reduce the utility of the assessments [73]. Examples are plentiful. To illustrate, nearly four decades ago sociologist Charles Bosk [74] wrote in Forgive and Remember: Managing Medical Failure that senior surgeons’ subjective evaluations of junior trainees were highly intuitive, impressionistic, and focused more on learner character than on technical skill. Similarly, many years ago Jack Haas and William Shaffir [75] described the “ritual evaluation of competence” embodied in clinical evaluation schemes where learners engage in active “impression management” to influence supervisors’ evaluations. These and many other studies reported over the past 40 years point out that the quality, utility, and validity of clinical ratings of health professions students, residents, and fellows are in doubt. Rigorous, standardized, and generalizable measures of clinical competence are needed.
Contemporary writing about subjective evaluations of health professions learners by faculty in clinical settings continues to testify about flaws in this approach. Physician Eric Holmboe is an outspoken critic of faculty observations as an approach to evaluate clinical skills among medical trainees. There are two reasons for Holmboe’s criticism: (a) “the biggest problem in the evaluation of clinical skills is simply getting faculty to observe trainees” [76] and (b) “current evidence suggests significant deficiencies in faculty direct observation evaluation skills” [77]. A similar situation has been reported about clinical evaluations of nursing students where “questioning students to assess their grasp of their assigned patients’ clinical status” occurs rarely [47]. Thus, subjective observational evaluations of learner clinical skills in the health professions are flawed due to sins of omission and sins of commission.
In summary, current approaches used to evaluate achievement among learners in the health professions—tests of acquired knowledge, OSCEs, and subjective evaluations of clinical performance—provide an incomplete record of readiness for clinical practice among learners. Evaluation data are also used infrequently to give learners specific, actionable feedback for clinical skill improvement. Standardized knowledge tests typically yield very reliable data that can contribute to a narrow range of decisions about learner clinical fitness. Evaluation data derived from OSCEs and especially subjective observations tend to be much less reliable and have little utility for reaching educational decisions. Consequently, many programs of health professions education fall short of Holmboe’s admonition, “Medical educators have a moral and professional obligation to ensure that any trainee leaving their training program has attained a minimum level of clinical skills to care for patients safely, effectively, and compassionately” [77].
Clinical Practice Outcomes
Osler’s natural method of teaching, expressed as experiential clinical learning in the health professions, has been the educational mainstay for over a century. The problem is that longitudinal clinical education without a competency focus, rigorous evaluation, detailed feedback, tight management, and accountability does not work very well.
Published evaluation studies about clinical skill acquisition among medical learners who were educated traditionally reveal consistent, concerning results. There are many examples.
To illustrate, a 3-year study conducted in the 1990s involved objective evaluations of 126 pediatric residents. The residents failed to meet faculty expectations about learning basic skills such as physical examination, history taking, laboratory use, and telephone patient management as a consequence of education based solely on clinical experience [78]. Other studies report that residents and students who only receive experiential learning acquire very weak ECG interpretation skills [79,80,81] and are not ready for professional practice. Another line of medical education research documents skill and knowledge deficits among medical school graduates about to start postgraduate residency education at the University of Michigan. These studies report that skill and knowledge deficits include such basic competencies as interpreting critical laboratory values, cross-cultural communication, evidence-based medicine, radiographic image interpretation, aseptic technique, advanced cardiac life support, and cardiac auscultation [82, 83].
A recent study conducted under auspices of the American Medical Association reports, “One hundred fifty-nine students from medical schools in 37 states attending the American Medical Association’s House of Delegates Meeting in June 2015 were assessed on an 11-element skillset on BP measurement. Only one student demonstrated proficiency on all 11 skills. The mean number of elements performed properly was 4.1. The findings suggest that changes in medical school curriculum emphasizing BP measurement are needed for medical students to become, and remain, proficient in BP measurement. Measuring BP correctly should be taught and reinforced throughout medical school, residency, and the entire career of clinicians” [84].
Traditional undergraduate clinical education in medicine, grounded chiefly in patient care experience, has failed to produce young doctors who are ready for postgraduate education in a medical specialty. A recent survey of medicine residency program directors shows that, “a significant proportion of [new] residents were not adequately prepared in order filling, forming clinical questions, handoffs, informed consent, and promoting a culture of patient safety” [85]. Survey research results in surgical education paint a similar picture. A 2017 multi-institution surgical education study under auspices of the Procedural Learning and Safety Collaboration (PLSC) concluded that “US GS (general surgery) residents are not universally ready to independently perform the most common core procedures by the time they complete residency training. Significant gaps remain for less common core and non-core procedures” [86]. Other reports have spawned the growth of “boot camp” clinical education crash courses designed to better prepare new physicians for patient care responsibilities they will face as residents [87,88,89,90,91,92,93,94].
The weight of evidence is now very clear that traditional clinical education in medicine and other health professions, mostly based on clinical experience, is simply not effective at producing competent practitioners. The conclusion is evident: there is an acute need to modernize health professions education to match expectations expressed by the National Academy of Sciences, Engineering, and Medicine [5], “… [health professions] educators should ensure that curricula and training programs across the career trajectory employ educational approaches that are aligned with evidence from the learning sciences.”
New Directions for Clinical Education
The premise of this chapter is that clinical education in the health professions is not standardized and is ineffective. It is based on an obsolete model about the acquisition of knowledge, skill, and professionalism attributes grounded chiefly in clinical experience that has not kept up with the rapidly changing healthcare environment. Today, unmanaged clinical experience alone is insufficient to ensure that nurses, physicians, physical therapists, pharmacists, dentists, midwives, and other health professionals are fit to care for patients.
The weakness of traditional clinical education is especially evident in comparison to new education approaches like simulation-based education with deliberate practice. In medicine, for example, this has been demonstrated in a systematic, meta-analytic, head-to-head comparison of traditional clinical education versus simulation-based medical education (SBME) with DP [15]. Quantitative aggregation and analysis of 14 studies involving 633 medical learners shows that without exception SBME with DP produces much better education results than clinical experience alone (Fig. 1.2). The effect size for the overall difference between SBME with DP and traditional clinical education is expressed as a Cohen’s d coefficient = 2.00 [7]. This is a huge difference, a magnitude never before reported in health professions education comparative research.
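For readers less familiar with effect size metrics, Cohen’s d expresses a between-group difference in pooled standard deviation units, so a d of 2.00 means the average SBME-with-DP learner outscored the average traditionally trained learner by two standard deviations. The following is a minimal sketch of the calculation using hypothetical checklist scores, not the data from the meta-analysis itself:

```python
import math

def cohens_d(mean_treat, mean_ctrl, sd_treat, sd_ctrl, n_treat, n_ctrl):
    """Cohen's d: standardized mean difference using the pooled standard deviation."""
    pooled_sd = math.sqrt(((n_treat - 1) * sd_treat ** 2 + (n_ctrl - 1) * sd_ctrl ** 2)
                          / (n_treat + n_ctrl - 2))
    return (mean_treat - mean_ctrl) / pooled_sd

# Hypothetical data: an SBME-with-DP group scores a mean of 90 (SD 10) on a skills
# checklist; a traditionally trained group scores a mean of 70 (SD 10); 50 learners each.
d = cohens_d(90, 70, 10, 10, 50, 50)
print(d)  # → 2.0, i.e., the group means differ by two pooled standard deviations
```

By the conventional benchmarks, a d of 0.8 is already considered a large effect, which is why a d of 2.00 stands out in comparative education research.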
There are at least five new directions for clinical education in the health professions that warrant attention: (a) focus on the learning sciences; (b) active learning; (c) deliberate practice; (d) rigorous, reliable measurement with feedback; and (e) mastery learning.
Learning Sciences
Psychologist Richard Mayer [32] separates the science of learning from the science of instruction. The science of learning seeks to understand how people learn from words, pictures, observation, and experience—and how cognitive operations mediate learning. The science of learning is about acquisition and maintenance of knowledge, skill, professionalism, and other dispositions needed for clinical practice. The science of instruction, by contrast, “is the scientific study of how to help people learn” [32]. Health professions educators need to be conversant with both the science of learning and the science of instruction to plan and deliver educational programs that produce competent and compassionate clinicians.
There are, in fact, a variety of learning sciences that find homes for application in health professions education. A detailed description of the various learning theories is beyond the scope of this chapter (but see Chap. 2). Many scientists too numerous to fully name or credit here have sought to deepen our understanding of human learning in the health professions via empirical and synthetic scholarship. Several select, yet prominent, examples of learning sciences include behaviorism [95], cognitive load theory [96], constructivism [97], problem-based learning [98], and social cognitive theory [99]. Many other illustrations addressing different scientific perspectives could be identified.
The important point is that health professions educators need to make better use of current learning sciences knowledge, in addition to advancing the learning sciences research agenda, as education programs in the health professions are designed and maintained.
Active Learning
A meta-analysis of 225 science education research studies published in the Proceedings of the National Academy of Science [33] shows unequivocally that active learning—in-class problem-solving, worksheets, personal response systems, and peer tutorials—is far superior to passive learning from lectures for achieving student learning goals. The authors assert, “The results raise questions about the continued use of traditional lecturing as a control in research studies, and support active learning as the preferred, empirically validated teaching practice in regular classrooms.” The lesson is that health science learners need to be actively engaged in professionally relevant tasks to grow and strengthen their competence. Passive learning strategies such as listening to lectures or watching videos are much less effective.
Deliberate Practice
Deliberate practice is a construct coined and advanced by psychologist K. Anders Ericsson and his colleagues [95, 100,101,102,103,104]. The Ericsson team sought to study and explain the acquisition of expertise in a variety of skill domains including sports, music, writing, science, and the learned professions including medicine and surgery [102]. Rousmaniere [105] has extended this work to education for professional psychotherapists. The Ericsson team’s research goal was to isolate and explain the variables responsible for the acquisition and maintenance of superior reproducible (expert) performance. Ericsson and his colleagues found consistently that the origins of expert performance across skill domains do not reside in measured intelligence, scholastic aptitude, academic pedigree, or longitudinal experience. Instead, acquisition of expertise stems from extensive DP, on the order of 10,000 hours, with the required amount varying by skill domain.
Ericsson writes that his research group:
…identified a set of conditions where practice had been uniformly associated with improved performance. Significant improvements in performance were realized when individuals were (1) given a task with a well-defined goal, (2) motivated to improve, (3) provided with feedback, (4) provided with ample opportunities for repetition and gradual refinements of their performance. Deliberate efforts to improve one’s performance beyond its current level demands full concentration and often requires problem-solving and better methods of performing the tasks [101].
Deliberate practice in health professions education means that learners are engaged in planned, difficult, and goal-oriented work, supervised and coached by teachers, who provide feedback and correction, under conditions of high achievement expectations, with revision and improvement to existing mental representations. Deliberate practice is the polar opposite of the natural method of teaching favored in Osler’s [30] day or even the more recent “laissez faire method of learning ” described by Kenneth Ludmerer [22].
Rigorous, Reliable Measurement with Feedback
The use of quality measures that yield highly reliable data is essential to provide learners with specific, actionable feedback to promote their improvement in knowledge, skill, and professionalism. Highly reliable assessment data have a strong “signal” with very little “noise” or error [106]. Reliable data are also needed to make accurate decisions about learner advancement in educational programs. Educational quality improvement (QI) requires that the reliability of data derived from measurements and assessments be checked regularly and improved as needed to ensure the accuracy and fairness of learner evaluations.
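One common index of the internal consistency of checklist data is Cronbach’s alpha, which rises as checklist items “hang together” across examinees. The following is a minimal sketch with hypothetical binary checklist scores; the studies cited in this chapter report their own reliability statistics:

```python
def cronbach_alpha(item_scores):
    """Cronbach's alpha: internal-consistency reliability of a multi-item checklist.

    item_scores: one inner list per checklist item, with scores aligned across
    the same examinees.
    """
    def variance(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    k = len(item_scores)
    item_var_sum = sum(variance(item) for item in item_scores)
    totals = [sum(scores) for scores in zip(*item_scores)]  # each examinee's total score
    return (k / (k - 1)) * (1 - item_var_sum / variance(totals))

# Hypothetical data: three binary checklist items scored for five examinees.
alpha = cronbach_alpha([[1, 1, 0, 1, 1],
                        [1, 1, 0, 1, 0],
                        [1, 0, 0, 1, 1]])
print(round(alpha, 2))  # → 0.7
```

Routinely recomputing such an index as part of educational QI is one way to verify that assessment data carry enough “signal” to support advancement decisions and feedback.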
Over the past decade, a Northwestern University team of researchers completed a series of simulation-based (S-B) clinical skill acquisition programs that feature attention to learning science, active learning, deliberate practice, and mastery learning. A key to the success of these programs is constant QI attention to the reliability of outcome measurement data. A visible example of one such program, led by physician Jeffrey Barsuk, concerns training internal medicine and emergency medicine residents on proper insertion of central venous catheters (CVCs) in a medical intensive care unit (MICU), with subsequent training of ICU nurses on CVC maintenance skills. In brief, the research program results demonstrate reliable measurement of CVC skills acquired in the simulation laboratory [107]. Downstream translational measured outcomes [108] also show that residents who received S-B training inserted CVCs in the MICU with significantly fewer patient complications than traditionally trained residents [109]. A before-after study in the MICU showed that the simulation-based educational intervention also produced a reliably measured 85% reduction in central line-associated bloodstream infections over 39 months [17]. S-B training also produced large improvements in ICU nurses’ CVC maintenance skills to a median score of 100% measured with high reliability [110].
There is no doubt about the importance of rigorous, reliable measurement with feedback to boost health professions education and translate into meaningful clinical outcomes.
Mastery Learning
Mastery learning , the theme of this book, aims to achieve “excellence for all” in health professions education. The basic idea is that any health professions curriculum—medicine, nursing, pharmacy, dentistry, etc.—is a sample of professional practice. Tests, evaluations, and examinations are a sample of the curriculum. The educational aim is to align learner evaluations with curriculum and professional practice goals, an alignment that will never be flawless.
Mastery learning requires that all learners achieve all curriculum learning objectives to high performance standards without exception. Educational outcomes are uniform among learners, while the time needed to reach the outcomes may vary. This is a radical departure from the traditional model of health professions education where learning time is fixed and measured learning outcomes vary, often distributed as a normal curve. The idea of mastery learning conforms with a medical education recommendation proposed by Cooke, Irby, and O’Brien in their book, Educating Physicians: A Call for Reform of Medical School and Residency [25], “Standardize learning outcomes and individualize learning processes.”
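The fixed-outcome, variable-time logic of mastery learning can be expressed as a simple loop, sketched here with a hypothetical learner whose checklist score improves with each practice round (the function names and numbers are illustrative, not drawn from any study cited in this chapter):

```python
def mastery_loop(assess, train, mps):
    """Train until the minimum passing standard (MPS) is met; return practice rounds used."""
    rounds = 0
    while assess() < mps:
        train()       # deliberate practice with feedback and correction
        rounds += 1   # time-to-standard varies; the standard itself does not
    return rounds

# Hypothetical learner: each practice round raises the checklist score by 15 points.
state = {"score": 60}
rounds = mastery_loop(assess=lambda: state["score"],
                      train=lambda: state.update(score=state["score"] + 15),
                      mps=90)
print(rounds)  # → 2 (60 → 75 → 90; a faster learner would simply exit the loop sooner)
```

The contrast with the traditional model is the loop condition itself: learners exit only on reaching the standard, whereas time-fixed rotations exit everyone at the same point regardless of measured outcome.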
The time has come for a new model of clinical education in the health professions. We have relied for too long on time-based rotations for learners to acquire clinical skills and multiple-choice tests as proxy measures of clinical learning outcomes. The new model will complement, sometimes replace, traditional clinical education and will link classroom and learning laboratory measurements with downstream clinical impacts. Mastery learning will be the cornerstone of this new model of clinical education.
Coda
For all the reasons discussed in this chapter, current healthcare provider education simply does not work very well. The current model needs to be augmented by a new and improved training model that will complement clinical training and enhance education and downstream patient outcomes. We must move from time-based rotations and multiple-choice tests to routine and continuous assessments of actual clinical skills [111]. Chapter 2 of this book describes the mastery learning model in detail and provides examples of its utility in health professions education.
References
Barsuk JH, Cohen ER, Caprio T, McGaghie WC, Simuni T, Wayne DB. Simulation-based education with mastery learning improves residents’ lumbar puncture skills. Neurology. 2012;79(2):132–7.
Nathan BR, Kincaid O. Does experience doing lumbar punctures result in expertise? A medical maxim bites the dust. Neurology. 2012;79(2):115–6.
Choudhry NK, Fletcher RH, Soumerai SB. Systematic review: the relationship between clinical experience and quality of health care. Ann Intern Med. 2005;142:260–73.
Barsuk JH, Cohen ER, Feinglass J, McGaghie WC, Wayne DB. Experience with clinical procedures does not ensure competence: a research synthesis. J Grad Med Educ. 2017;9:201–8.
National Academy of Sciences, Engineering, and Medicine. Improving diagnosis in healthcare. Washington, DC: The National Academies Press; 2015.
Issenberg SB, McGaghie WC. Looking to the future. In: McGaghie WC, editor. International best practices for evaluation in the health professions. London: Radcliffe Publishing, Ltd.; 2013. p. 341–59.
McGaghie WC, Kristopaitis T. Deliberate practice and mastery learning: origins of expert medical performance. In: Cleland J, Durning SJ, editors. Researching medical education. New York: John Wiley & Sons; 2015. p. 219–30.
Susskind R, Susskind D. The future of the professions: how technology will transform the work of human experts. New York: Oxford University Press; 2015.
National League for Nursing. Outcomes and competencies for graduates of practical/vocational, diploma, baccalaureate, master’s practice, doctorate, and research. Washington, DC: National League for Nursing; 2012.
Association of American Medical Colleges. Core entrustable professional activities for entering residency. Curriculum developer’s guide. Washington, DC: AAMC; 2014.
Holmboe ES, Edgar L, Hamstra S. The milestones guidebook. Chicago: Accreditation Council on Graduate Medical Education; 2016.
Nasca TJ, Philibert I, Brigham T, Flynn TC. The next GME accreditation system—rationale and benefits. N Engl J Med. 2012;366(11):1051–6.
Levine AI, DeMaria S Jr, Schwartz AD, Sim AJ, editors. The comprehensive textbook of healthcare simulation. New York: Springer; 2013.
Fincher R-ME, White CB, Huang G, Schwartzstein R. Toward hypothesis-driven medical education research: task force report from the Millennium Conference 2007 on educational research. Acad Med. 2010;85:821–8.
McGaghie WC, Issenberg SB, Cohen ER, Barsuk JH, Wayne DB. Does simulation-based medical education with deliberate practice yield better results than traditional clinical education? A meta-analytic comparative review of the evidence. Acad Med. 2011;86:706–11.
Cohen ER, Feinglass J, Barsuk JH, Barnard C, O’Donnell A, McGaghie WC, Wayne DB. Cost savings from reduced catheter-related bloodstream infection after simulation-based education for residents in a medical intensive care unit. Simul Healthc. 2010;5:98–102.
Barsuk JH, Cohen ER, Feinglass J, McGaghie WC, Wayne DB. Use of simulation-based education to reduce catheter-related bloodstream infections. Arch Intern Med. 2009;169:1420–3.
Puschmann T. A history of medical education. New York: Hafner Publishing Co.; 1966. (Originally published 1891).
Sigerist HE. A history of medicine, vol. I: primitive and archaic medicine. New York: Oxford University Press; 1951.
Sigerist HE. A history of medicine, vol. II: early Greek, Hindu, and Persian medicine. New York: Oxford University Press; 1961.
Ludmerer KM. Learning to heal: the development of American medical education. Baltimore: Johns Hopkins University Press; 1985.
Ludmerer KM. Let me heal: the opportunity to preserve excellence in American medicine. New York: Oxford University Press; 2015.
Cassedy JL. Medicine in America: a short history. Baltimore: Johns Hopkins University Press; 1991.
Starr P. The social transformation of American medicine. New York: Basic Books; 1982.
Cooke M, Irby DM, O’Brien BC. Educating physicians: a call for reform of medical school and residency. Stanford: Carnegie Foundation for the Advancement of Teaching; 2010.
Flexner A. Medical education in the United States and Canada. Bulletin no. 4 of the Carnegie Foundation for the Advancement of Teaching. New York: Carnegie Foundation for the Advancement of Teaching; 1910.
Committee on the Future of Dental Education, Division of Healthcare Services, Institute of Medicine, Field MS, editor. Dental education at the crossroads: challenges and change. Washington, DC: National Academy Press; 1995.
Buerki RA, editor. Teaching the history of pharmacy today. Madison: American Institute of the History of Pharmacy; 1999. Retrieved from https://pharmacy.wisc.edu/sites/default/files/content/american-institute-history-pharmacy/resources-teaching/teachinghistpharm.pdf.
Murphy W. Healing the generations: a history of physical therapy and the American Physical Therapy Association. Alexandria: American Physical Therapy Association; 1995.
Osler W. The hospital as a college. In: Osler W, editor. Aequanimitas. Philadelphia: P. Blakiston’s Son & Co.; 1932. p. 313–25.
Halsted WS. The training of the surgeon. Bull Johns Hopkins Hosp. 1904;15:267–75.
Mayer RE. Applying the science of learning to medical education. Med Educ. 2010;44:543–9.
Freeman S, Eddy SL, McDonough M, Smith MK, Okoroafor N, Jordt H, Wenderoth MP. Active learning increases student performance in science, engineering, and mathematics. PNAS. 2014;111(23):8410–5.
Holmboe E, Ginsburg S, Bernabeo E. The rotational approach to medical education: time to confront our assumptions? Med Educ. 2011;45:69–80.
Bell RH Jr, Biester TW, Tabuenca A, Rhodes RS, Cofer JB, Britt D, Lewis FR Jr. Operative experience of residents in U.S. general surgery programs: a gap between expectation and experience. Ann Surg. 2009;249:719–24.
Malangoni MA, Biester TW, Jones AT, Klingensmith ME, Lewis FA Jr. Operative experience of surgery residents: trends and challenges. J Surg Educ. 2013;70(6):783–8.
Bucher R, Stelling JG. Becoming professional. Beverly Hills: Sage Publications; 1977.
McGlynn TJ Jr, Munzenrider RF, Zizzo J. A resident’s internal medicine practice. Eval Health Prof. 1979;2(4):463–76.
Reid RA, Lantz KH. Physician profiles in training the graduate internist. J Med Educ. 1977;52:301–7.
Peets AD, Stelfox HT. Changes in residents’ opportunities for experiential learning over time. Med Educ. 2012;46:1189–93.
Mickelsen S, McNeil R, Parikh P, Persoff J. Reduced patient “code blue” experience in the era of quality improvement. New challenges in physician training. Acad Med. 2011;86(6):726–30.
Barr J, Graffeo CS. Procedural experience and confidence among graduating medical students. J Surg Educ. 2016;73(3):46–7.
Shah NG, Seam N, Woods CJ, Fessler HE, Goyal M, McAreavey D, Lee B. A longitudinal regional educational model for pulmonary and critical care fellows emphasizing small group and simulation-based learning. Ann Am Thorac Soc. 2016;13(4):469–74.
Birkmeyer JD, Finks JF, O’Reilly A, Oerline M, Carlin AM, Nunn AR, Dimick J, Banerjee M, Birkmeyer NJO. Surgical skill and complication rates after bariatric surgery. N Engl J Med. 2013;369(15):1434–42.
Barsuk JH, Cohen ER, Nguyen D, Mitra D, O’Hara K, Okuda Y, Feinglass J, Cameron K, McGaghie WC, Wayne DB. Attending physician adherence to a 29 component central venous catheter bundle checklist during simulated procedures. Crit Care Med. 2016;44:1871–81.
Gubrud-Howe P, Schoessler M. From random access opportunity to a clinical education curriculum. J Nurs Educ. 2008;47(1):3–4.
Ironside PM, McNelis AM. Clinical education in prelicensure nursing programs: findings from a national survey. Nurs Educ Perspect. 2010;31(4):264–5.
Niederhauser V, Schoessler M, Gubrud-Howe PM, Magnussen L, Codier E. Creating innovative models of clinical nursing education. J Nurs Educ. 2012;51(11):603–8.
Ironside PM, McNelis AM, Ebrigret P. Clinical education in nursing: rethinking learning in practice settings. Nurs Outlook. 2014;62(3):185–91.
Hubbard JP. Measuring medical education: the tests and the experience of the National Board of Medical Examiners. 2nd ed. Philadelphia: Lea & Febiger; 1978.
Monroe WS. An introduction to the theory of educational measurements. Boston: Houghton Mifflin; 1923.
Federation of State Medical Boards of the United States, Inc., and the National Board of Medical Examiners. Bulletin of information; 2016. Retrieved from http://usmle.org.
National Council of State Boards of Nursing. NCLEX examination candidate bulletin; 2016. Retrieved from https://www.ncsbn.org/1213.htm.
American Dental Association. Report of the ADA-recognized dental specialty certifying boards; 2015. Retrieved from http://www.ada.org/.
American College of Clinical Pharmacy. Board certification and recertification; 2016. Retrieved from http://www.accp.com/careers/certification.aspx.
American Physical Therapy Association. About the national physical therapy examination; 2016. Retrieved from http://www.apta.org/Licensure/NPTE?.
National Commission on Certification of Physician Assistants. Initial certification for physician assistants, PANCE Exam; 2016. Retrieved from http://www.nccpa.net/become-certified.
American Osteopathic Association. AOA board certification; 2016. Retrieved from https://www.osteopathic.org/inside-aoa/development/aoa-board-certification.
Clauser BE, Margolis MJ, Case SM. Testing for licensure and certification. In: Brennan RL, editor. Educational measurement. 4th ed. Westport: American Council on Education and Praeger Publishers; 2006. p. 701–31.
Neely D, Feinglass J, Wallace WH. Developing a predictive model to assess applicants to an internal medicine residency. J Grad Med Educ. 2010;2(1):129–32.
McGaghie WC, Cohen ER, Wayne DB. Are United States Medical Licensing Exam Step 1 and 2 scores valid measures for postgraduate medical residency selection decisions? Acad Med. 2011;86(1):48–52.
Harden RM, Stevenson M, Downie WW, Wilson GM. Assessment of clinical competence using objective structured examination. BMJ. 1975;1:447–51.
Rushforth HE. Objective structured clinical examination (OSCE): review of literature and implications for nursing education. Nurse Educ Today. 2007;27(5):481–90.
Sloan DA, Donnelly MB, Schwartz RW, Strodel WE. The objective structured clinical examination: the new gold standard for evaluating postgraduate clinical performance. Ann Surg. 1995;222(6):735–42.
Yudkowsky R. Performance tests. In: Downing SM, Yudkowsky R, editors. Assessment in health professions education. New York: Routledge; 2009. p. 217–43.
Norcini JJ, Blank LL, Arnold GK, Kimball HR. The mini-CEX (clinical evaluation exercise): a preliminary investigation. Ann Intern Med. 1995;123(10):795–9.
Behere R. Introduction of mini-CEX in undergraduate dental education in India. Educ Health. 2014;27(3):262–8.
Milner KA, Watson SM, Steward JG, NeNisco S. Use of mini-CEX tool to assess clinical competence in family nurse practitioner students using undergraduate students as patients and doctoral students as evaluators. J Nurs Educ. 2014;53(12):718–20.
Weijs CA, Coe JB, Hecker KG. Final-year students’ and clinical instructors’ experience of workplace-based assessments used in a small-animal primary-veterinary-care clinical rotation. J Vet Med Educ. 2015;42(4):382–92.
Association of American Medical Colleges. Number of medical schools requiring final SP/OSCE examination: 2006–2007 through 2010–2011; 2016. Retrieved from https://www.aamc.org/initiatives/cir/406426/9.html.
Brannick MT, Erol-Korkmaz HT, Prewett M. A systematic review of the reliability of objective structured clinical examination scores. Med Educ. 2011;45:1181–9.
McGaghie WC, Butter J, Kaye M. Observational assessment. In: Downing SM, Yudkowsky R, editors. Assessment in health professions education. New York: Routledge; 2009. p. 185–215.
Williams RG, Klamen DA, McGaghie WC. Cognitive, social, and environmental sources of bias in clinical performance ratings. Teach Learn Med. 2003;15(4):270–92.
Bosk CL. Forgive and remember: managing medical failure. 2nd ed. Chicago: University of Chicago Press; 2003.
Haas J, Shaffir W. Becoming doctors: the adoption of a cloak of competence. Greenwich: JAI Press; 1987.
Holmboe ES. Direct observation by faculty. In: Holmboe ES, Hawkins RE, editors. Practical guide to the evaluation of competence. Philadelphia: Mosby Elsevier; 2008. p. 119–29.
Holmboe ES. Faculty and the observation of trainees’ clinical skills: problems and opportunities. Acad Med. 2004;79(1):16–22.
Joorabchi B, Devries JM. Evaluation of clinical competence: the gap between expectation and performance. Pediatrics. 1996;97(2):179–84.
Pinkerton RE, Francis CK, Ljungquist KA, Howe GW. Electrocardiographic training in primary care residency programs. JAMA. 1981;246:148–50.
Boltri JM, Hash RB, Vogel RL. Are family practice residents able to interpret electrocardiograms? Adv Health Sci Educ. 2003;8:149–53.
Wilcox JE, Raval Z, Patel AB, Didwania A, Wayne DB. Imperfect beginnings: incoming residents vary in their ability to interpret basic electrocardiogram findings. J Hosp Med. 2014;9(3):197–8.
Lypson ML, Frohna JG, Gruppen LD, Woolliscroft JO. Assessing residents’ competencies at baseline: identifying the gaps. Acad Med. 2004;79(6):564–70.
Wagner D, Lypson ML. Centralized assessment in graduate medical education: cents and sensibilities. J Grad Med Educ. 2009;1:21–7.
Rakotz MK, Townsend RR, Yang J, et al. Medical students and measuring blood pressure: results from the American Medical Association blood pressure check challenge. J Clin Hypertens. 2017;19:614–9.
Pearlman RE, Pawelczak M, Yacht AC, et al. Program director perceptions of proficiency of the core entrustable professional activities. J Grad Med Educ. 2017;9(5):588–92.
George BC, Bohnen JD, Williams RG, Procedural Learning and Safety Collaborative (PLSC), et al. Readiness of US general surgery residents for independent practice. Ann Surg. 2017;266(4):582–94.
Laack TA, Newman JS, Goyal DG, Torsher LC. A 1-week simulated internship course helps prepare medical students for transition to residency. Simul Healthc. 2010;5:127–32.
Antonoff MB, Swanson JA, Green CA, Mann BD, Maddaus MA, D’Cunha J. The significant impact of a competency-based preparatory course for senior medical students entering surgical residency. Acad Med. 2012;87(3):308–19.
Cohen ER, Barsuk JH, Moazed F, Caprio T, Didwania A, McGaghie WC, Wayne DB. Making July safer: simulation-based mastery learning during intern boot camp. Acad Med. 2013;88:233–9.
Naylor RA, Hollett LA, Castellvi A, Valentine RJ, Scott DJ. Preparing medical students to enter surgical residencies. Am J Surg. 2010;199(1):105–9.
Nishisaki A, Hales R, Biagas K, Cheifetz I, Corriveau C, Garber N, Hunt E, Jarrah R, McCloskey J, Morrison W, Nelson K, Niles D, Smith S, Thomas S, Tuttle S, Helfaer M, Nadkarni V. A multi-institutional high-fidelity simulation “boot camp” orientation and training program for first year pediatric critical care fellows. Pediatr Crit Care Med. 2009;10:157–62.
Reed T, Pirotte M, McHugh M, Oh L, Lovett S, Hoyt AE, Quinones D, Adams W, Gruener G, McGaghie WC. Simulation-based mastery learning improves medical student performance and retention of core clinical skills. Simul Healthc. 2016;11:173–80.
Salzman DH, McGaghie WC, Caprio T, Even E, Hufmeyer K, Issa N, Schaefer E, Trainor J, Wayne DB. Use of simulation-based capstone course to teach and assess entrustable professional activities to graduating medical students. Med Sci Educ. 2016;26:453–6.
Salzman DH, McGaghie WC, Caprio TW, et al. A mastery learning capstone course to teach and assess components of three entrustable professional activities to graduating medical students. Teach Learn Med. 2019;31(2):186–94.
Ericsson KA, Krampe RT, Tesch-Römer C. The role of deliberate practice in the acquisition of expert performance. Psychol Rev. 1993;100:363–406.
Leppink J, van Gog T, Paas F, Sweller J. Cognitive load theory: researching and planning teaching to maximize learning. In: Cleland J, Durning SJ, editors. Researching medical education. Oxford: Wiley Blackwell; 2015. p. 207–18.
Mann K, MacLeod A. Constructivism: learning theories and approaches to research. In: Cleland J, Durning SJ, editors. Researching medical education. Oxford: Wiley Blackwell; 2015. p. 51–65.
Norman GR, Schmidt HG. The psychological basis of problem-based learning: a review of the evidence. Acad Med. 1992;67(9):557–65.
Torre D, Durning SJ. Social cognitive theory: thinking and learning in social settings. In: Cleland J, Durning SJ, editors. Researching medical education. Oxford: Wiley Blackwell; 2015. p. 105–16.
Ericsson KA. Deliberate practice and the acquisition and maintenance of expert performance in medicine and related domains. Acad Med. 2004;79(10, Suppl):S70–81.
Ericsson KA. Deliberate practice and the acquisition of expert performance: a general overview. Acad Emerg Med. 2008;15:988–94.
Ericsson KA, Charness N, Feltovich PJ, Hoffman RR, editors. The Cambridge handbook of expertise and expert performance. New York: Cambridge University Press; 2006.
Ericsson K, Pool R. Peak: secrets from the new science of expertise. Boston: Houghton Mifflin Harcourt; 2016.
Ericsson KA, Whyte J, Ward P. Expert performance in nursing: reviewing research on expertise in nursing within the framework of the expert performance approach. Adv Nurs Sci. 2007;30(1):E58–71.
Rousmaniere T. Deliberate practice for psychotherapists: a guide to improving clinical effectiveness. New York: Routledge; 2017.
Axelson RD, Kreiter CD. Reliability. In: Downing SM, Yudkowsky R, editors. Assessment in health professions education. New York: Routledge; 2009. p. 57–73.
Barsuk JH, McGaghie WC, Cohen ER, Balachandran JS, Wayne DB. Use of simulation-based mastery learning to improve the quality of central venous catheter placement in a medical intensive care unit. J Hosp Med. 2009;4:397–403.
McGaghie WC. Medical education research as translational science. Sci Transl Med. 2010;2:19cm8.
Barsuk JH, McGaghie WC, Cohen ER, O’Leary KJ, Wayne DB. Simulation-based mastery learning reduces complications during central venous catheter insertion in a medical intensive care unit. Crit Care Med. 2009;37:2697–701.
Barsuk JH, Cohen ER, Mikolajczak A, Seburn S, Slade M, Wayne DB. Simulation-based mastery learning improves central line maintenance skills of ICU nurses. J Nurs Adm. 2015;45(10):511–7.
van der Vleuten CPM, Schuwirth LWT. Assessing professional competence: from methods to programmes. Med Educ. 2005;39:309–17.
© 2020 Springer Nature Switzerland AG
McGaghie, W.C., Barsuk, J.H., Wayne, D.B. (2020). Clinical Education: Origins and Outcomes. In: McGaghie, W., Barsuk, J., Wayne, D. (eds) Comprehensive Healthcare Simulation: Mastery Learning in Health Professions Education. Comprehensive Healthcare Simulation. Springer, Cham. https://doi.org/10.1007/978-3-030-34811-3_1
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-34810-6
Online ISBN: 978-3-030-34811-3