INTRODUCTION

In a February 2018 New York Times commentary titled “The Misguided Drive to Measure ‘Learning Outcomes’,” Molly Worthen, a university history professor, rankled the academic assessment community by lamenting the degeneration of an otherwise reasoned, well-meaning endeavor into an administrative obsession with record-keeping.1 The issue, she contends, is how an initial focus on educational quality gets sidetracked by competing institutional interests intent not on improving student learning,1 but on maximizing efficiency and documenting outcomes: By bureaucratic inertia, a learner-centered commitment morphs into an organization-centered requirement.

Since the watershed Flexner report,2 the medical profession has shouldered a weighty responsibility to ensure a quality “product.” Yet, compared with medical care,3 such explicit emphasis on quality in medical education is somewhat more recent. Indeed, Worthen’s concerns aptly reflect parallel developments in undergraduate medical education (UME), where the Liaison Committee on Medical Education (LCME)—the primary accrediting body for North American allopathic medical schools—now mandates that all MD degree–granting programs engage in continuous quality improvement (CQI) processes which, in part, “result in the achievement of measurable outcomes that are used to improve programmatic quality”.4

Implicated in these changes is the escalating rationalization of medical education. At a macro-structural level, rationalization is the “evolution” of modern society toward objective, empirically driven thought and action to maximize control and minimize unpredictability. Closer to home, in UME, it reflects the desire to quantify, monitor, and manipulate educational outcomes and processes.

As a result, the volume and detail of educational documentation, reporting, and review appear headed to unprecedented levels. Many programs now include dedicated “quality” offices—often with accreditation officers and data analysts processing steady streams of metrics organized around specific standards, curricular stages, or stakeholder interests. Competency-based assessment, educational milestones, and entrustable professional activities (EPAs) now constitute the training vernacular that once featured the “see one, do one, teach one” model. Artifacts, benchmarks, and rubrics dominate the corresponding assessment dialogue. Reminiscent of transformations in patient care,5 forces are actively reshaping what we do as medical educators.

UME today is intensely deliberate, purposefully focused, and prospectively apportioned: No curricular time is left uncharted; little instructional effort remains undefined. An intricate web of learning objectives, all mapped to discrete competencies and/or EPAs, is progressively nested within sessions, courses, curricula, and programs—all of which link skyward to an organizational strategic plan. Empirical evidence is the unabashed coin of the realm: No outcome = no learning. A “tyranny of relevance” dictates educational content.6

This commentary discusses the forces underlying this increasing rationalization in UME and their potential impact on meaningful, sustained efforts to improve the quality of undergraduate medical curricula—now a formal accreditation requirement. Practical suggestions are offered that may help legitimize CQI and offset the resistive effects of rationalization.

DISCUSSION

Rationalization

Rationalization, broadly speaking, is the gradual replacement of traditions, values, and emotions by objective, calculated motivators of behavior.7 This modern “evolution” occurs when social institutions that once held sway with magic, mysticism, and religion give way to “rational” structures that compel individuals to act predictably, reasonably, and efficiently8—a process that, when it permeates all aspects of social life, has been dubbed “McDonaldization”.9 Medicine and health care, too, have moved from an era of trust and prerogative to one focused on accountability, scrutiny, measurement, and incentives.10

Directly or indirectly, various forces have seeded rationalization in UME. First, the erosion of trust in doctors and the medical profession11—much of it associated with the rise of bureaucratic medicine12—has heightened public demands for transparency and accountability. Universities, challenged to defend the cost and value of a college education, have felt similar pressure to demonstrate a commitment to quality13, 14 and dispel consumerist connotations.6, 15 Many medical schools, following public demands on health care providers to share clinical outcomes,16 now provide data which, despite limitations,17, 18 remain a widely referenced indicator of educational quality.19, 20

Second, UME shows increasing signs of the commodification initially evidenced in the “proletarianization”21 and “deprofessionalization”22 of the physician workforce and more recently extended to biomedical achievements.23 Absent a functional alignment of academic missions,24 translating teaching effort into objective, measurable “value units” is one strategy to incentivize education alongside research and patient care.25 In academic medical centers, economic pressures on the current practice environment threaten to further heighten tensions between clinical and subsidized UME interests.26

Lastly, to incorporate increasing volumes of educational content (including, somewhat ironically, CQI27), UME is compelled to maximize pedagogical “return on investment” by optimizing the relevance, flow, delivery, and “connectedness” of content.6, 28 With accreditation standards also requiring a careful accounting of educational time and intent, the precise measurement and control of increasingly minute curricular details appear destined to continue.

While self-regulation remains a defining characteristic of professions,29 and rationalization of the educational process will undoubtedly have unanticipated consequences, the heightened level of scrutiny it affords need not be detrimental. How, then, can CQI facilitate positive, meaningful change without being viewed as an administrative mandate or usurped by rational, organizational self-interest? It is this topic on which the remainder of the commentary will focus.

Continuous Quality Improvement

Although the terms are often used interchangeably, key differences exist between quality assurance (QA) and CQI: The former is a focused, management-driven method to reactively identify problems and gauge performance relative to an established benchmark.30, 31 CQI, in contrast, is a proactive methodology that, while using sophisticated statistical methods and technological platforms, ideally entails a corresponding culture change. Ongoing improvement, rather than attainment of a static benchmark, is the guiding impetus of CQI.30, 31

Higher education in general appears poised for a major paradigm shift from an assessment-centered to an improvement-centered philosophy,32 and medical education, perhaps influenced by the quality “revolution” in health care,33 may be trending ahead of this curve. Indeed, the use of logical systems to improve production processes or product quality, many built around Deming’s34 classic PDSA (plan-do-study-act) cycle, is becoming commonplace in UME. Yet, for reasons mentioned—including rationalizing forces in education—even the most well-intentioned CQI efforts may fall short of their full potential. Several suggestions are offered to help ease the implementation of educational CQI processes that are deliberate, empowering, and sustainable.
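
To make the cycle concrete, the minimal sketch below (in Python) walks one course-level measure through a single plan-do-study-act iteration, with the “act” step feeding the next cycle rather than closing the process. The course name, metric, values, and decision rule are invented for illustration; nothing here is drawn from the LCME or from any cited program.

```python
from dataclasses import dataclass, field
from typing import Callable, List

# Illustrative sketch of one PDSA iteration applied to a course-level metric.
# Names, values, and the decision rule are hypothetical.
@dataclass
class PDSACycle:
    aim: str                       # what the cycle is trying to improve
    plan: str                      # the intended change ("plan")
    collect: Callable[[], float]   # gathers the metric after the change ("do")
    baseline: float                # prior performance used to interpret results ("study")
    log: List[str] = field(default_factory=list)

    def run(self) -> float:
        observed = self.collect()                                   # do
        improved = observed > self.baseline                         # study
        action = "adopt and monitor" if improved else "adapt plan"  # act
        self.log.append(
            f"{self.aim}: {self.baseline:.1f} -> {observed:.1f}; next step: {action}"
        )
        return observed  # becomes the baseline for the next cycle


# Usage: one cycle around a (made-up) end-of-block exam mean.
cycle = PDSACycle(
    aim="Cardiovascular block exam performance",
    plan="Add weekly formative quizzes",
    collect=lambda: 81.2,   # stand-in for real score aggregation
    baseline=78.4,
)
cycle.baseline = cycle.run()  # feed results forward rather than stopping here
print(cycle.log)
```

The deliberate omission of a static pass/fail benchmark reflects the point above: the output of each cycle is the input to the next, not a terminal judgment.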

Implementation Strategies

Establish a Clear Purpose

An obvious but significant challenge for any educational CQI effort is to initiate and maintain a focus on teaching and learning, and to use the information reflected in operational outcomes to meaningfully inform those activities. Failing this, the process is likely to devolve into anonymous, unrewarding “shadow labor” or, in clinical parlance, administrative “scut work.” Define the purpose of CQI clearly, early on, and repeatedly. Where possible, coordinate data collection and reporting with existing scheduled activities—like departmental reviews, university accreditation, or strategic planning. Consider keeping a running inventory of tangible, CQI-based actions and results; at our school, this is compiled around associated LCME standards and elements.
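
As one hedged illustration, such a running inventory can be as simple as a log keyed to standards and elements. The element label and entries below are placeholders, not an authoritative LCME mapping.

```python
from collections import defaultdict
from datetime import date
from typing import Optional

# Hypothetical sketch: a running inventory of tangible CQI actions keyed to
# LCME standards/elements. Element labels and entries are placeholders.
inventory = defaultdict(list)

def log_action(element: str, action: str, result: str,
               when: Optional[date] = None) -> None:
    """Record a CQI-based action and its observed result under an element."""
    inventory[element].append({
        "date": (when or date.today()).isoformat(),
        "action": action,
        "result": result,
    })

log_action("Standard 8 / curricular review (placeholder label)",
           "Revised renal small-group cases after weak exam performance",
           "Improved item scores on the next administration")

for element, entries in inventory.items():
    print(element)
    for entry in entries:
        print(f"  {entry['date']}: {entry['action']} -> {entry['result']}")
```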

Secure Leadership Buy-in

Although “top-down” decisions are not always well received, eliciting explicit buy-in from key leadership is essential to effective educational CQI. Helping leaders envision their roles as educational stakeholders can garner buy-in and secure necessary tangible and intangible support. How this endorsement is conveyed is also important. Posing the threat of sanctions from accrediting bodies may get the required boxes checked, but it will not necessarily translate into improved curricula—much less eventual culture change. Even if education is paid only cursory lip service, leaders routinely use data on research space, extramural funding, and clinical volumes (for example) in organizational decision-making. The benefits of striving for the same in educational matters should be easily grasped.

Envision a Process

Just as a healthy lifestyle is unlikely to result from sporadic dieting, periodically ramping up CQI efforts is not particularly effective—and runs counter to the LCME’s guiding intent.35 With the emphasis on outcomes, CQI can seem like a series of repeated starts and stops, each culminating in analysis and reflection. Stress continuity of the process, not repetition of the cycle; prevent “standardized” from becoming “routinized.” If possible, illustrate key CQI linkages horizontally and vertically: The former shows connectedness, the latter coordination. Help stakeholders envision CQI as an extended regimen, not a discrete event.

“Humanize” the Process

Although some degree of centralized coordination is necessary, avoid deeding primary ownership of CQI to a distant, impersonal office, committee, or task force—or, worse still, linking it to an administrative policy or regulation. Suskie36 suggests coordinating CQI efforts from faculty development (teaching-learning) centers rather than offices of institutional effectiveness or accreditation. Regardless of where CQI efforts are functionally housed, encourage involvement and nurture partnerships. Perform CQI “with” educational faculty—not “on” them.

Share Results, Accountability

The importance of sharing CQI results cannot be overstated. Whatever the impact, briefly summarize each step in the process. This reinforces the logic and intent of CQI, reiterates actions taken, and, again, keeps the process from sinking into the abyss of mandatory reporting. Consider presenting results in focused, more discrete contexts—preferably areas or domains with which stakeholders are most familiar and actively engaged. For example, our college’s CQI plan functions primarily at the undergraduate course level.37 Regardless of focus, consider dynamic, online dashboards to disseminate assessment results.38 Build solidarity around a collective commitment to education—shift the impetus from “you must” to “we should.” This also empowers individuals to retain creative control over their respective domains—a general enabling process Harvey and Lynch term “facilitation.”39
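
For instance, a minimal sketch of the kind of payload a course-level dashboard might consume is shown below; the course names, ratings, and JSON shape are invented for illustration and do not reflect any particular dashboard product or the plan cited above.

```python
import json
from statistics import mean

# Hypothetical aggregation of course-level CQI results for a web dashboard.
course_ratings = {
    "Foundations of Medicine": [4.1, 4.3, 3.9],
    "Cardiovascular Block": [3.6, 3.8, 3.7],
}

payload = [
    {
        "course": course,
        "mean_rating": round(mean(scores), 2),
        "n_evaluations": len(scores),
    }
    for course, scores in course_ratings.items()
]

print(json.dumps(payload, indent=2))  # would be served to a dashboard front end
```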

Scrutinize Measures, Outcomes

Just as reliability and validity are not inherent to any measure, neither are performance metrics indicative of specific outcomes—or any outcome, for that matter. Exercise a healthy degree of critical review in educational CQI. As Pathman40 notes, we are quick to take credit for learners’ academic successes but reluctant to assume responsibility for their failures. Similarly, do not let trends or passing fads distract from basic elements like standard setting or grade calibration.36 Recognize that validly measuring some medicine-specific constructs (e.g., professionalism, lifelong learning, systems-based practice) may demand added diligence and attention.41 Finally, ensure that metrics, however chosen, are widely disseminated and understood.

SUMMARY

Individuals committed to training future generations of physicians recognize the importance of quality in undergraduate medical education and the need to regularly examine, reflect on, and improve these efforts. For various reasons, however, meaningfully applying a “quality” philosophy is not without challenges—and even the best-laid plans to implement formal, ongoing programs to monitor and improve curricular quality can encounter resistance.

Despite calls to extend the focus beyond student outcomes42 and move medicine toward an era of minimal mandatory measurement,9 undergraduate medical educators, in complying with related LCME requirements,35 face obstacles similar to those encountered with QA in health care9—namely, negotiating the tension “between the romance of professional self-regulation, on the one hand, and duty, on the other hand, of professionals in all their roles, including professional educators, to keep track of and respond to information on what society thinks of and wants from their work”.43

UME may be especially sensitive to these competing interests. As business and academic interests converge,44 amortizing teaching effort45, 46 into a rational, extrinsic reward structure47 could prove particularly problematic for mentored or apprenticeship models of instruction. Moreover, while many CQI approaches may easily accommodate cognitive outcomes, other constructs (e.g., learning environment, professionalism, self-directed learning) may translate less easily into system inputs and outputs.48 Ultimately, the forces redefining UME today could, in their effects, mirror those that saw chronic illness, technology, and the modern hospital permanently alter the nature and organization of “medical work.”5

The factors driving an increased emphasis on clinical revenue, transparency, and external accountability are unlikely to subside—pushing medical education, and the desire to exert more control over it, toward ever more “rational” evidence of quality and its continued improvement. That said, while educational CQI efforts will never be entirely free of structural constraint, it remains within educators’ reach to positively influence their focus, impact, and legitimacy.