In a widely quoted New Yorker piece published last year (October 3, 2011), Atul Gawande urges doctors to embrace coaching as a means to improve clinical performance. Using a series of compelling stories, Gawande shows how coaching makes a huge difference to athletes, musicians, and business executives. He describes how inviting an old surgical mentor into the operating room led to some surprising insights—and better surgical results.

Many internists and other cognitive specialists may have finished Gawande’s piece thinking: coaching sounds great for surgeons and other proceduralists, but what’s in it for me? However, communicating with patients is the ‘procedure’ most often performed by general internists and other cognitive specialists, and, like other procedures, it can be learned and refined through guided practice. High-stakes conversations such as breaking bad news, mediating disagreements among family members, motivating change in health behaviors, and leading family meetings are all pivotal moments in the physician-patient relationship and are, in their own way, just as challenging, complex, and momentous as a difficult surgical procedure. Yet in training and in practice, we are rarely observed, coached, or asked to reflect on our communication skills. Is it any wonder, then, that faculty and trainees alike plateau at whatever level of skill they can reach on their own?

This issue of JGIM features several articles that address how to support and motivate performance improvement. At the organizational level, the effects of well-meaning interventions are difficult to anticipate fully. As described by Powell et al., the VA implemented a number of performance standards as part of its re-engineering process during the Clinton administration. These standards were intended to improve the clinical quality of care and were initially effective in doing so. More recently, however, there have been unintended consequences. Powell et al. show that in attempting to comply with computerized reminders, VA staff encountered instances of inappropriate clinical care, decreased provider focus on patient concerns, and diminished patient autonomy. Non-physician staff also described resentment at doing work that would help physicians earn financial rewards. In an accompanying editorial, former VA Undersecretary for Health Kenneth Kizer highlights the importance of local input in developing standards that make sense for a particular practice community and the patients it serves.

At no time in their professional lives are physicians under greater scrutiny than during residency training. Yet many internal medicine trainees complete residency having seldom been observed performing a complete history and physical examination. The quality of feedback is also highly variable, with most end-of-rotation evaluations containing little constructive criticism. One reason (as suggested by Cavalcanti and Detsky in JAMA, September 7, 2011) is the difficulty of reconciling the dual roles of coach and evaluator. Although there have been proposals to separate the roles (for example, see MJ Gordon, Academic Medicine, October 1997), the idea has not caught on. The most likely explanation is that the exigencies of a busy ward service preclude the kind of detailed observation and feedback that residents need and that at least some attendings would like to provide. In this issue, Ratanawongsa et al. test this hypothesis by evaluating a radically redesigned inpatient rotation at Johns Hopkins Bayview Medical Center, where patient volume was literally cut in half. Attendings were required to observe residents and give them feedback on several mandatory activities, including post-discharge telephone calls to all patients, home visits to select patients, telephone calls with outpatient providers, and structured interviews about medications. As a result, residents reported improved knowledge of their patients, and patients reported greater satisfaction with physician care.

Feedback is most effective when it is immediate, specific, balanced, and behaviorally focused. In contrast, the feedback given to speakers at internal and external medical education conferences is usually delayed, vague, and evaluative. The article by Wittich et al. describes an attempt to break this mold. The authors redesigned the standard evaluation form used by Mayo Medical School for its Continuing Medical Education programs. The new form successfully elicited more balanced and behavior-specific feedback from course attendees.

While there is no doubt that incentives, coaching, and feedback have their place, there is still no substitute for practice: 10,000 hours of it for complex skills, if Malcolm Gladwell is to be believed. In his “Eulogy to Overnight Call,” Christopher Moriates laments the disappearance of what now seems, in retrospect, an unparalleled opportunity to watch acute disease evolve over time, quietly reflect on one’s successes and failures, and test one’s mettle as a physician (in the absence of those pesky attendings). In fairness, the author also acknowledges the bone-wracking fatigue, irritability, depression, and loss of compassion that could accompany 36-hour shifts. In any case, work-hour limitations are a fact of life. If we are to help residents achieve their very best medical selves in a fixed number of hours, we must get much better at providing feedback. We know, however, that giving effective feedback isn’t easy. Some of us might want to think about getting a coach.