
A skilled surgical practitioner requires a depth of cognitive knowledge, an appropriate surgical judgment, and an ability to act quickly but thoughtfully and when necessary in a decisive manner. The surgeon must have compassion and be a good communicator and must be perceptive and dedicated. Surgeons must also be skilled in the surgical craft to perform particular technical tasks which are often the centerpiece of the care of the surgical patient.

Wanzel, Ward, and Reznick [1]

Introduction

Clinical competence and demonstration of professional excellence are requisites for the majority of accreditation processes for the recognition or certification of skills worldwide. Embedded in many surgical residency programs is a competency-based curriculum designed to challenge, observe, and measure residents’ performance as they progress through their programs. Before accreditation is granted, surgical residents are expected to demonstrate proficiency in the tasks set forth by their local program and its governing agencies. The development of educational objectives, curricula, and means of assessment is an ongoing process and a primary focus for surgical educators in ensuring the competence of surgical residents.

Defining Competence

Currently, there is no agreed-upon definition of medical competence in the literature; however, Epstein and Hundert (2002) have proposed that professional medical competence is “the habitual and judicious use of communication, knowledge, technical skills, clinical reasoning, emotions, values, and reflection in daily practice for the benefit of the individual and community being served” [2]. Leach (2002) adds that skill acquisition is a developmental process: although insights may occur suddenly during training, competence develops over time and is “nurtured by reflection on experiences” [3]. As Epstein and Hundert suggest, competence is a habit [2], and “to be competent, residents must be involved enough to be accountable” [3].

Historical Significance

Medical systems worldwide have been charged with consistently reexamining their educational programs and processes, both for licensing and accreditation purposes and to ensure that residents are equipped with the competencies required to practice safe, effective medicine and surgery. Medicine has also undergone a recent paradigm shift, from a generally unstructured apprenticeship model to a more practice-based systems approach [4].

In the late 1800s, William Halsted established the modern American surgical residency program at Johns Hopkins School of Medicine, based on the German system of regimen, discipline, and graded responsibility [1, 5]. Still, no single system worked well across medical institutions. In an effort to regulate medical training, the American Medical Association (AMA) prompted the Carnegie Foundation to commission Abraham Flexner, who published the Flexner Report in 1910, a summary of his investigation of medical schools in the United States and Canada that contained quality ratings of each institution. Flexner was critical of the process of medical education, concluding that only about 10 % of approximately 300 medical schools were worth maintaining, and recommended that medical schools become affiliated with universities [4]. In the years that followed, slow advancement in the practice of medicine, growing public skepticism about the competence of doctors, and the increasing responsibilities of residents [6] led the Accreditation Council for Graduate Medical Education (ACGME) to establish its Outcomes Project in 1999, with the aim of improving the quality of graduate medical education by focusing on educational outcomes [4].

Based on the Outcomes Project, the ACGME mandated that, prior to accreditation, each resident’s performance be assessed across six competency domains: patient care, medical knowledge, practice-based learning and improvement, professionalism, interpersonal and communication skills, and systems-based practice [7]. The member boards of the American Board of Medical Specialties (ABMS), including the American Board of Surgery (ABS) and the American Board of Urology (ABU), must now demonstrate that certification and competence are interrelated; in the past, this relationship was simply assumed to exist, without any concrete process for demonstrating it [8].

Alongside the work in the United States, the Canadian competency initiative, known as the CanMEDS framework (Canadian Medical Education Directives for Specialists), was created in 1996 by the Royal College of Physicians and Surgeons of Canada (RCPSC) [9] out of a perceived need to improve the medical training process across Canada. This reform was initiated by a shift in societal expectations, which generated questions about patient consumerism, patient safety, quality of care, technological advances, fiscal constraint, government regulation, physician competence, and maintenance of training [10, 11]. The most recent CanMEDS framework (2005) measures seven core competencies: medical expert, communicator, collaborator, manager, health advocate, scholar, and professional. The CanMEDS program has been adopted in at least 17 jurisdictions worldwide (including European and Asian countries) and has been used in the frameworks of at least eight professions [12].

In the United Kingdom, the Foundation Programme curriculum [13], updated in 2007, focuses on measuring seven competencies: good clinical care, maintaining good medical practice, teaching and training, relationships with patients and communication, working with colleagues, professional behavior and probity, and acute care.

The CanMEDS Framework

When comparing the CanMEDS, ACGME, and UK-based competency frameworks (Table 98.1), it is clear that similarities exist across many of the core competencies. The means of assessing these competencies are also somewhat comparable. However, because of the widespread use of the CanMEDS framework [12], it will be used here as the primary example in describing the assessment and measurement of competency. The entry for each core competency (also called a “role”) contains a brief definition, a more detailed description of the role, a list of key competencies, and a section entitled “enabling competencies,” which details, for each key competency, what the physician should be able to do to satisfy it. The seven core competencies and their definitions are as follows [9]:

Table 98.1 CanMEDS, ACGME, and UK Foundation Programme core competencies
1. Medical Expert

   As medical experts, physicians integrate all of the CanMEDS roles, applying medical knowledge, clinical skills, and professional attitudes in their provision of patient-centered care. Medical expert is the central physician role in the CanMEDS framework.

2. Communicator

   As communicators, physicians effectively facilitate the doctor-patient relationship and the dynamic exchanges that occur before, during, and after the medical encounter.

3. Collaborator

   As collaborators, physicians effectively work within a health-care team to achieve optimal patient care.

4. Manager

   As managers, physicians are integral participants in health-care organizations, organizing sustainable practices, making decisions about allocating resources, and contributing to the effectiveness of the health-care system.

5. Health Advocate

   As health advocates, physicians responsibly use their expertise and influence to advance the health and well-being of individual patients, communities, and populations.

6. Scholar

   As scholars, physicians demonstrate a lifelong commitment to reflective learning, as well as the creation, dissemination, application, and translation of knowledge.

7. Professional

   As professionals, physicians are committed to the health and well-being of individuals and society through ethical practice, profession-led regulation, and high personal standards of behavior.

The CanMEDS framework is designed to apply to multiple specialties; however, it does not specifically address surgical competencies, as these are thought to be part of the hidden curriculum within the “medical expert” role [14]. Although the CanMEDS framework has been distributed and used widely, its uptake into practice has been somewhat slow. Mickelson and MacNeily [10] suggest that this could be due to barriers such as a lack of understanding of what the core competencies actually represent, a lack of tools available to teach them, and an inability to quantify residents’ performance. Further, they believe that urology programs have struggled to incorporate the competencies into existing curricula and that urology faculty may be unsure how to assess performance [10]. This is especially true of the assessment of technical skills. In an attempt to assist program directors and educators in the assessment process, the ACGME Outcomes Project included a “toolbox” of instruction and assessment methods for its competencies [15], and CanMEDS has produced an assessment tools handbook [16]. Both provide methods for assessing and measuring competencies in each domain, and the two publications overlap somewhat in the assessment methods they recommend.

Assessment and Measurement of Competencies

Background

Assessment strategies vary with the purpose of the test or action being evaluated. Some tests provide feedback to trainees, some assess a trainee’s readiness to progress to the next level of training, and others are used to issue licenses to practice medicine or to certify specialty competence [1]. Tests can be cognitive (knowledge-based) or technical (skills-based). In 2002, Wanzel and colleagues asserted that there are two main categories of assessment: formative and summative [1].

Formative assessment is conducted primarily to provide constructive feedback to residents and focuses on individual progress. It measures progress toward meeting objectives and helps identify those who require additional assistance and instruction. Summative assessment accumulates all relevant information in order to make a pass or fail decision: whether a resident qualifies to continue to the next level of training, whether he or she should be dropped from the residency program, or whether the resident should be recommended for board certification.

Cutoff scores for determining pass or fail are set either by comparison with peers (norm-referenced standard setting) or by comparison with objectives or test content (criterion-referenced standard setting).
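
To make the distinction concrete, the following minimal Python sketch scores a hypothetical cohort both ways. The scores, the one-standard-deviation rule for the norm-referenced cutoff, and the 60 % criterion are illustrative assumptions only; actual standard-setting procedures are defined by the examining body.

```python
from statistics import mean, stdev

# Hypothetical percentage scores for a cohort of residents (illustrative only)
scores = [62.0, 71.5, 58.0, 84.0, 77.5, 69.0, 90.5, 73.0, 66.5, 80.0]

# Norm-referenced standard setting: the cutoff depends on peer performance.
# Here it is (arbitrarily) placed one standard deviation below the cohort mean.
norm_cutoff = mean(scores) - stdev(scores)

# Criterion-referenced standard setting: the cutoff is fixed in advance by
# reference to the test content, regardless of how the cohort performs.
criterion_cutoff = 60.0  # e.g., mastery of 60 % of the tested objectives

for score in scores:
    norm = "pass" if score >= norm_cutoff else "fail"
    crit = "pass" if score >= criterion_cutoff else "fail"
    print(f"score {score:5.1f}: norm-referenced {norm}, criterion-referenced {crit}")
```

Note that under the norm-referenced rule, the same raw score can pass in a weak cohort and fail in a strong one, whereas the criterion-referenced cutoff is stable across cohorts.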

When assessing residents, the tools used to assess performance must be valid and reliable measures. Reliability ensures that the test is reproducible: results would be expected to be the same upon repeated administration to the same person. The notion of validity is more complex than that of reliability, as multiple factors determine whether the interpretations made on the basis of test scores are valid. These include construct, criterion, and content validity, all of which are important to consider when determining whether a test measures what it sets out to measure. Each of the assessment measures described in this chapter has been found to be reliable and valid, although, as with any test, certain biases introduced during the assessment process may jeopardize reliability and validity. In the next section, we briefly describe types of assessments and how they are incorporated into examination programs.
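
Reliability in the sense of reproducibility is often quantified with a test-retest correlation: the same test is administered twice to the same people, and the two sets of scores are correlated. The sketch below, using invented scores, computes a Pearson correlation coefficient; a value near 1 suggests highly reproducible results. This is an illustrative calculation, not a procedure prescribed by any of the examinations discussed in this chapter.

```python
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists of scores."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Invented scores for the same five residents on two administrations
# of the same test (illustrative only)
first_sitting = [71, 65, 88, 79, 60]
second_sitting = [74, 63, 85, 81, 58]

r = pearson_r(first_sitting, second_sitting)
print(f"test-retest reliability (Pearson r): {r:.2f}")  # near 1 = reproducible
```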

Assessment Measures

Written Examinations: Short-Answer (SAQ), Multiple-Choice (MCQ), and Open-Ended Questions

Written exams with short-answer, multiple-choice, and open-ended questions evaluate factual knowledge and abstract problem solving [17]. These exams are one of the most effective means for evaluating medical knowledge and are widely used [15]. These types of questions are often used along with other measures as part of an examination program.

Objective Structured Clinical Examinations (OSCE)

The OSCE is a performance-based, multi-station clinical examination developed in 1975 by Harden and colleagues and is widely used in residency competency assessment. The OSCE combines direct observation by a physician-examiner with criterion-based assessment [1]. Each station lasts between 5 and 10 min and has a different examiner, and OSCEs generally use standardized patients to simulate clinical scenarios. A global score determines whether the candidate passes or fails each station.
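
As an illustration of this station structure, the following sketch models a hypothetical OSCE circuit in which each station carries its own examiner’s global pass/fail judgment. The station scenarios, examiners, and the simple tally of stations passed are invented for illustration; actual overall pass/fail rules vary by examination.

```python
from dataclasses import dataclass

@dataclass
class StationResult:
    scenario: str  # clinical scenario, usually simulated by a standardized patient
    examiner: str  # each station has a different physician-examiner
    minutes: int   # stations typically run 5-10 min
    passed: bool   # examiner's global pass/fail score for the station

# A hypothetical six-station circuit (all names and results are invented)
circuit = [
    StationResult("abdominal examination", "Examiner A", 10, True),
    StationResult("informed consent discussion", "Examiner B", 5, True),
    StationResult("acute flank pain history", "Examiner C", 10, False),
    StationResult("urinalysis interpretation", "Examiner D", 5, True),
    StationResult("postoperative counseling", "Examiner E", 10, True),
    StationResult("catheterization on a model", "Examiner F", 5, True),
]

passed = sum(station.passed for station in circuit)
print(f"stations passed: {passed}/{len(circuit)}")
```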

360-Degree Evaluations

360-degree evaluations use measurement tools completed by multiple people in a resident’s sphere of influence [10]. These evaluators can include physicians from other specialties who have worked with the resident, nurses, allied health-care workers, patients, and other trainees: senior residents evaluating junior residents and medical students, and vice versa.

The United States Medical Licensing Examination (USMLE)

The USMLE assesses the ability to apply knowledge, concepts, and principles and to demonstrate fundamental patient-centered skills that are important in health and disease and that constitute the basis of safe and effective patient care. It is a three-step examination that determines whether medical students possess sufficient medical knowledge to become physicians [18]. It is unclear whether USMLE results can predict who will become a proficient surgeon, but studies are examining this potential correlation.

The Medical Council of Canada Qualifying Examination (MCCQE)

The MCCQE (part I) is a one-day, computer-based test that assesses the competence (knowledge, clinical skills, and attitudes) of candidates who have obtained their medical degree and are entering supervised clinical practice in postgraduate training programs. The MCCQE (part II) is a three-hour OSCE that assesses the competence of candidates, specifically the knowledge, skills, and attitudes essential for medical licensure in Canada, prior to entry into independent clinical practice [19]. Competencies assessed by the MCCQE are comparable to those measured by the USMLE.

Urology-Specific Assessments: American Urological Association In-Service Examination (AUA ISE), the Queen’s Urology Examination Skills Training (QUEST), and the European Board of Urology (EBU) In-Service Assessment

The American Urological Association gives residents in North American urology programs the opportunity to take the AUA ISE, an MCQ test that provides a yearly review of each resident’s performance compared with other North American cohorts. In Canada, in addition to the AUA ISE, the QUEST program serves, toward the end of residency training, to assess knowledge and practice through SAQs and OSCEs. Performance on both the AUA ISE and the QUEST correlates well with results on RCPSC certifying examinations [10]. The ACGME Residency Competency Evaluation System-Urology [15] also provides a Web-based system of competency evaluations for urology residents that includes the Global Resident Competency Rating Form, 360° Rating Form, Operative Performance Rating Form, and Observed Patient Encounter Rating Form.

Mickelson and MacNeily have proposed a summary of potential instruction methods and assessment strategies for urology residents, based on the CanMEDS competencies [10]. For each core competency, or role, the authors suggest certain instruction methods that may help residents achieve competence, followed by different assessment strategies (some of which are not noted above) for each role. Table 98.2 is an adaptation of the authors’ summary and may be very helpful for curriculum design and program implementation.

Table 98.2 Instruction methods and potential assessment strategies for urology residents using the CanMEDS framework

The European Board of Urology (EBU) administers annual in-service examinations to residents as part of the European Urology Residency Curriculum. As a section of the European Union of Medical Specialists (UEMS), the EBU collaborates with the European Association of Urology (EAU) to promote high-quality urological training and assessment. The in-service assessment is designed to provide program directors and residents with feedback on training progress and to identify areas that require further study. Residents’ results are compared with those of other trainees with the same level of experience (duration of training). The assessment allows residents to test their theoretical knowledge and clinical competence across 23 main urology subjects. It consists of 100 MCQs covering all fields of urology, can be completed either online or on paper, and must be finished within 2 h. The EBU recommends that practicing urologists also take these annual assessments as part of their continuing medical education; they are assigned a separate peer group [20].

Technical Skills Competency Assessment

Given the changes and advances in surgical techniques, tools, and technologies over time, surgical educators face multiple challenges in ensuring residents’ technical skills competency. In urology, as in other specialties, achievement of the objectives of training is hampered by various factors, such as limited hours in the workweek, regulations requiring attending surgeon participation in all procedures, pressure on faculty surgeons to increase their productivity, operating room costs, and increased public awareness of medical errors [21]. For the most part, technical skills assessment is subjective and therefore may not always be a reliable gauge of competency. The Objective Structured Assessment of Technical Skills (OSATS) was developed to reduce variability among assessments and has been shown to increase the reliability and validity of the assessment process [22].

Objective Structured Assessment of Technical Skills (OSATS)

Martin and colleagues [23] developed the OSATS as a tool to ensure the technical competence of graduates of surgical programs [24]. The OSATS consists of multiple (usually six to eight) individual tasks that the resident performs over a 90-min period. A qualified surgeon assesses each task using two separate marking methods: a task-specific checklist and a global rating scale. On the checklist, each step is marked as performed effectively or not; on the global rating scale, seven general operative competencies are rated on a five-point Likert scale.
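
The two marking methods can be pictured with the brief sketch below, which scores a single hypothetical OSATS task. The checklist items are invented, the seven competency names follow those commonly reported for the OSATS global rating scale, and the summary totals are illustrative rather than an official scoring rule.

```python
# Task-specific checklist: each step is marked done-effectively or not
# (items below are invented for a suturing-style task).
checklist = {
    "selects appropriate suture material": True,
    "maintains needle angle through tissue": True,
    "ties secure square knots": False,
}
checklist_score = sum(checklist.values())  # one point per item done effectively

# Global rating scale: seven general operative competencies, each rated 1-5.
# Domain names follow those commonly reported for the OSATS global scale.
global_ratings = {
    "respect for tissue": 4,
    "time and motion": 3,
    "instrument handling": 4,
    "knowledge of instruments": 5,
    "use of assistants": 3,
    "flow of operation and forward planning": 4,
    "knowledge of specific procedure": 4,
}

print(f"checklist: {checklist_score}/{len(checklist)} items done effectively")
print(f"global rating: {sum(global_ratings.values())}/{5 * len(global_ratings)}")
```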

Fundamentals of Laparoscopic Surgery (FLS) Program

The Fundamentals of Laparoscopic Surgery (FLS) program was developed to address the need to educate surgeons in the underlying principles and basic skills of laparoscopic surgery and to meet the growing demand to document competency in surgical practice [25]. The Society of American Gastrointestinal and Endoscopic Surgeons (SAGES) and the FLS committee members have developed a technical skills curriculum specifically designed for use in residency training programs. The curriculum is proficiency based: trainees are oriented to the materials and then practice on their own until expert-derived performance levels are reached. The FLS has partnered with ongoing national (US) initiatives, making this curriculum part of the Basic and Advanced Laparoscopic Skills Modules included in the American College of Surgeons (ACS) and Association of Program Directors in Surgery (APDS) National Skills Curriculum Project [26]. The overall goal of the FLS program is to teach a standard set of cognitive and psychomotor skills to practitioners of laparoscopic surgery, ensuring a minimum standard of care for all patients undergoing laparoscopies [25].

The McGill Inanimate System for Training and Evaluation of Laparoscopic Skills (MISTELS) was developed as an educational instrument to train and evaluate basic laparoscopic skills outside of the operating room and is used in the FLS [27]. Dauster and colleagues (2005) studied the MISTELS with urology residents and determined that it has construct validity, supporting its use in evaluating urology residents’ fundamental laparoscopic skills [27]. Studies are currently under way to assess the value of MISTELS in training urology residents to perform laparoscopy.

The FLS assessment component is a two-part, proctored examination covering cognitive knowledge and manual skills; it is designed to test understanding and application of the basic fundamentals of laparoscopy, with emphasis on clinical judgment and intraoperative decision making [26]. The FLS can be taken and evaluated at the resident and fellow levels and is also available to surgeons who wish to complete it for continuing education purposes. As of 2010, the American Board of Surgery (ABS) requires all surgeons undertaking the certification process to pass the FLS program [28].

To our knowledge, no competency assessment measure for technical skills has yet been developed and tested at the high-stakes level. Although the need for such a measure has been clearly demonstrated in the literature, an appropriate test has not been devised, and the necessity for further development in this area continues to grow.

Conclusion

Ensuring the competency of surgical residents as they progress through training is essential to the development of the resident as a surgeon, to favorable patient outcomes, and to the success of residency programs. A competency-based curriculum is designed to measure the proficiency of its trainees and provides a framework for ensuring that residents can meet the standards required for accreditation. The core competencies that comprise the CanMEDS, ACGME, and UK Foundation Programme frameworks can be integrated into a surgical residency curriculum to provide guidelines for measuring residents’ competency in the multiple domains associated with a given specialty. Multiple tools exist to assess these competencies and can support formative or summative evaluations. Established tools have been found to be both reliable and valid, although additional research on the relationship between these measures and competence is warranted, and further development of technical skills assessment measures for use at the high-stakes level is necessary. As surgery continues to advance, society and other key stakeholders will need assurance that surgeons are competent to practice in a safe and sound manner.