What combination of clinical, evaluative, and didactic residency experiences makes the best internist? Despite 100 years of published standards for residency sponsorship, and 30 years after the formation of the Accreditation Council for Graduate Medical Education (ACGME), we know little about how many patients residents should see, how many hours of conference they should attend, what they should read, or how long they should train. We have few validated measures of an individual resident’s competence (or of a residency program’s). There are many reasons for our lack of evidence-based residency training: the inpatient service needs of sponsoring institutions, the cost of experimentation and innovation, and the burdensome, cookie-cutter, process-based requirements imposed by the ACGME, to name a few.

In 2004, the Long Range Planning Committee of the ACGME’s Residency Review Committee for Internal Medicine (RRC-IM) embarked on a venture to enhance innovation in programs, to measure and improve patient-related and resident-related outcomes, and to decrease the burden of accreditation.1 As a first step in accreditation reform, the RRC-IM developed the Educational Innovation Project (EIP), in which high-functioning programs could apply for a 10-year review cycle by submitting substantive proposals for innovations in training that could lead to improvements in patient outcomes and in measurable resident outcomes. These programs would be held to different, simpler, more outcomes-based requirements.2 Twenty-one categorical internal medicine programs were ultimately admitted to this pathway in 2006 and 2007.

One of those programs, the University of Cincinnati, reports on its progress in the area of resident medical knowledge competence in this issue of JGIM.3 Mathis and colleagues show that their novel year-long ambulatory and elective block rotation, combined with an intensive, year-long multiple-choice testing and feedback program, led to substantial increases in resident scores on the Internal Medicine In-Training Examination (IM-ITE). The program had noted that its residents’ scores were declining. In redesigning their training program under EIP, program leaders instituted a 12-month all-ambulatory-and-elective clinical year, or “long block,” giving residents a wider breadth of training coupled with fewer call nights (and perhaps more reading time). With help from the American Board of Internal Medicine (ABIM), they used questions from the American College of Physicians’ Medical Knowledge Self-Assessment Program (MKSAP) and ABIM Self-Evaluation Program (SEP) modules in monthly testing and discussion sessions. Residents who participated in the long clinical block and the testing program improved IM-ITE scores by 8.5% compared with historical controls who had neither the testing nor the clinical exposure of the “long block.” This is a remarkable improvement in ITE scores. Previous work by McDonald and colleagues suggests that an increase of this size is equivalent to that attributable to attending more than 200 additional core curriculum conferences in a year, or to reading 230 hours beyond baseline.4,5 I know of no other intervention that has been shown to produce such a dramatic increase in scores so quickly.

Some of the improvement the authors saw could clearly be attributed to the enforced test-taking and question-answering; residents in a recent multi-institution study read little overall (77% read less than 7 hours per week)6; thus, mandatory conference time and question practice may have augmented low levels of outside reading. However, in that same study, the authors found that residents mostly read in response to clinical encounters. What is particularly intriguing about Cincinnati’s “long block” study, but impossible to disentangle, is that part of the increase might have been brought about by the dramatic change in the clinical content of the year compared with previous inpatient-focused years. Residents spend the long block seeing outpatients in three sessions per week and build around that a series of outpatient and inpatient elective experiences and research. As the authors note, the clinical experiences are broad rather than the “narrow variety of inpatient diagnoses seen in (their) traditional residency.” To generalize the group’s findings to the broader educational community, it would be very important to understand how the clinical volume, the breadth, and the lower duty hours (including self-directed reading about patients rather than about multiple-choice questions) may have affected the knowledge outcome.

Previously, this EIP group reported that the change to the “long block” had improved patient care and patient satisfaction in their resident practice, resident satisfaction with the learning environment, and resident satisfaction with ambulatory training.7,8 For example, in the resident practice before the long block, only 7.7% of women had obtained a Pap smear in the previous 3 years, 35% of diabetic patients had had a foot exam in the previous year, and 28% of patients had up-to-date tetanus vaccinations. Just 1 year later, after the first “long block,” those numbers had improved to 62%, 60%, and 60%, respectively.

Thus, Mathis and colleagues have shown that their revolutionary “long block” of ambulatory training improves resident competency outcomes and patient outcomes, at least compared with historical controls in their own institution. Should all residency programs now move to a “long block” model? Perhaps we should be allowed to, but not required to; at least, not yet.

EIP programs are experimenting with several other aspects of residency training, including duty-hour structures, handoff strategies, admitting schemes, and the use of milestones for residency advancement. Hennepin County, Duke, and the University of California, San Francisco, all began EIP with similar plans to put residents into short (about one month) blocks of outpatient practice shared with a “practice partner” on a mirrored schedule. At Beth Israel Deaconess, we are using a different design: one intensive “practice week” every 6 weeks, with weekly continuity clinic during only 2 of the intervening weeks. Each design has shown promise, though resident competency outcomes and patient quality outcomes have not yet been published. EIP programs working on resident clinic practice redesign are participating in the EIP Continuity Measurement Work Group, a pilot to assess the current structure of continuity training at multiple participating institutions and to determine how structure relates to continuity of patient care, patient satisfaction, resident satisfaction, and quality-of-care indicators.9 Soon we may be closer to an understanding of what evidence-based ambulatory training looks like. The innovation and outside-the-box thinking allowed and encouraged under EIP, along with the mandate to measure patient care quality, combine in ways that can inform which residency structures might truly be “better” or “best.”

What does it take to fuel sustained innovation in residency training? The Cincinnati group and the other EIP programs have had a number of potentially important advantages:

  1. EIP programs were chosen from a group of highly successful programs marked by already-long accreditation cycles and motivated to write 20-page applications. The “carrot” was a combination of relaxed accreditation rules and a 10-year period before the next ACGME site visit.

  2. What may be more important than a long site-visit cycle is an annual reporting structure in which each program sets goals and reports to the ACGME on its success or failure in achieving them. Although site visits are scheduled 10 years apart, annual reports to the accrediting body do have a certain high-stakes feel.

  3. Institutions that applied for the EIP were required to commit resources to help guarantee successful goal attainment. Mathis, Warm, and colleagues note that removing residents from the inpatient service to allow the “long block” required covering newly uncovered beds. Another required resource is program director (PD) time: the PD must commit, and be supported, at 75% effort. While some PDs at large institutions already have that amount of salary support, most EIP program directors probably did not; the additional time, required by the ACGME, likely contributes to their ability to make change.

  4. The ACGME requires that the programs meet together annually at a program directors’ association meeting and that they disseminate EIP results locally and nationally. This meeting of motivated, curious, and incentivized educators has turned into a connected small community, spawning work groups centered on similar projects, collegial visits to each other’s programs, and an opportunity to stay energized.

  5. The ABIM partnered with Cincinnati, lending resources (SEP module questions, for example) and measurement expertise to help with the medical knowledge competency assessment.

The ACGME’s RRC-IM has restructured the IM program requirements around many of the EIP program requirements: the IM requirements that went into effect 2 years ago were 25% shorter, included 20% fewer process measures, and focused on outcomes assessment.10 The outcomes-oriented requirements alone might encourage innovation in training, but there is more to do. Institutions must help programs implement innovative models and measure their outcomes, as in Cincinnati. Understanding the quality of care given by an individual resident or a group of residents can be challenging in systems set up to measure institution-wide outcomes, but program directors need to advocate for, and educate leaders about, their data needs. The Association of Program Directors in Internal Medicine (APDIM) should encourage innovation and connection between programs beyond EIP, fostering collaborative work on specific, answerable training questions.

And while any program can submit an “innovation request” to the RRC-IM asking to model the “long block” in its own program, pieces of Cincinnati’s structure remain outside the standard IM program requirements, and the innovation request process can be bureaucratic. Cincinnati’s data make a convincing case that the long block, combined with structured testing, can improve resident knowledge and patient care. It’s time to put that innovation into more widespread practice.