Abstract
Background
Missed test results are a cause of medical error. Few studies have explored test result management in the inpatient setting.
Objective
To examine test result management practices of general internal medicine providers in the inpatient setting, examine satisfaction with practices, and quantify self-reported delays in result follow-up.
Design
Cross-sectional survey.
Participants
General internal medicine attending physicians and trainees (residents and medical students) at three Canadian teaching hospitals.
Main Measures
Methods used to track test results; satisfaction with these methods; personal encounters with results respondents “wish they had known about sooner.”
Key Results
We received surveys from 33/51 attendings and 99/108 trainees (response rate 83%). Only 40.9% of respondents kept a record of all tests they order, and 50.0% had a system to ensure ordered tests were completed. Methods for tracking test results included typed team sign-out lists (40.7%), electronic health record (EHR) functionality (e.g., the electronic “inbox”) (38.9%), and personal written or typed lists (14.8%). Almost all trainees (97.9%) and attendings (81.2%) reported encountering at least one test result they “wish they had known about sooner” in the past 2 months (p = 0.001). A higher percentage of attendings kept a record of tests pending at hospital discharge compared to trainees (75.0% vs. 35.7%, p < 0.001), used EHR functionality to track tests (71.4% vs. 27.5%, p = 0.004), and reported higher satisfaction with result management (42.4% vs. 12.1% satisfied or very satisfied, p < 0.001).
Conclusions
Canadian physicians report an array of problems managing test results in the inpatient setting. In the context of prior studies from the outpatient setting, our study suggests a need to develop interventions to prevent missed results and avoid potential patient harms.
INTRODUCTION
Diagnostic tests are critical for modern medical practice. However, a test is only useful if the results are reviewed and translated into action. There is growing appreciation that a significant percentage of tests are simply lost to follow-up.1,2,3,4,5,6 Breakdowns can occur at any stage of the testing process, but recognition of finalized results is particularly vulnerable to error,7, 8 especially during care transitions such as for tests pending at hospital discharge (TPADs).9,10,11,12 In 2004, Poon and colleagues found that only 52% of US outpatient internists kept a record of ordered tests, 32% had a system to detect if a patient failed to receive a test, and 59% were not satisfied with their test result management.13 In 2015, Litchfield et al. found that 40% of primary care practices in the UK required patients to phone for abnormal test results and 80% lacked a failsafe to ensure results were received.14 Other studies from the US and UK have also described challenges managing test results;15,16,17,18,19 however, there are few data focusing on inpatient care and virtually no literature on test result management in Canada. The lack of investigation in Canada is problematic because the Canadian healthcare system differs from the US and UK with respect to funding and organization and lags far behind most other developed countries with respect to electronic health record (EHR) implementation.20,21,22 Moreover, integrated healthcare delivery systems are underdeveloped,23 resulting in fragmentation and discontinuity when patients transition from hospital to outpatient settings.
The objectives of our study were to (1) explore test result management practices of Canadian general internal medicine (GIM) faculty and trainees providing inpatient care and (2) determine satisfaction with current practices and frequency of self-reported delays in test result follow-up. We hypothesized that problems identified in older studies would persist in current-day Canada.
METHODS
Setting and Participants
We conducted a cross-sectional survey at three University of Toronto teaching hospitals between November 2016 and October 2017. Study sites were Toronto General Hospital (TGH), Toronto Western Hospital (TWH), and Mount Sinai Hospital (MSH). All are tertiary/quaternary care hospitals located in downtown Toronto with most GIM admissions (> 90%) coming as referrals through the emergency department. Participants were either trainees (medical students and residents) working on GIM inpatient services at the time of the survey or staff physicians (i.e., attendings) who attend on inpatient GIM teaching services. Attendings typically perform > 90% of their clinical work at their base hospital; trainees rotate between hospitals and perform approximately 75% of their clinical work at their base hospital, which changes with each academic year.
TGH and TWH use a common EHR system (electronic patient record (EPR); QuadraMed CPR, Herndon, VA); MSH uses PowerChart (Cerner Corp., Kansas City, MO). Both EHRs provide computerized physician order entry and display completed test results. The EPR system includes an inbox function for result review and sign-off. The attending’s inbox automatically reports all new results for inpatients admitted under their name; trainees must take the additional step of assigning themselves to each patient to receive results. There is no inbox function at MSH. All sites use stand-alone typed electronic sign-out lists for physician handover; these must be manually populated with patient information, active issues, and the therapeutic plan.
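The routing rule described above (results flow automatically to the admitting attending's inbox, while a trainee receives results only for patients they have explicitly self-assigned) can be sketched as a simple illustrative model. All names and data structures here are hypothetical, not vendor code:

```python
# Illustrative model of the EPR inbox routing rule; not actual EHR code.

def route_result(patient, result, providers):
    """Return the names of providers whose inboxes receive a new result."""
    recipients = []
    for p in providers:
        if p["role"] == "attending" and p["name"] == patient["attending"]:
            # Attendings automatically receive every result for patients
            # admitted under their name.
            recipients.append(p["name"])
        elif p["role"] == "trainee" and patient["id"] in p["assigned_patients"]:
            # Trainees see results only for patients they self-assigned.
            recipients.append(p["name"])
    return recipients

providers = [
    {"role": "attending", "name": "Dr. A", "assigned_patients": set()},
    {"role": "trainee", "name": "Dr. B", "assigned_patients": {"MRN1"}},
    {"role": "trainee", "name": "Dr. C", "assigned_patients": set()},  # never self-assigned
]
patient = {"id": "MRN1", "attending": "Dr. A"}
print(route_result(patient, "K 6.1 mmol/L", providers))  # → ['Dr. A', 'Dr. B']
```

The sketch makes the failure mode concrete: a trainee who forgets the extra self-assignment step (Dr. C) silently receives nothing.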
Survey Tool
We developed a survey (Online Appendix) building upon prior studies of test result management.10, 13, 15, 19 Questions were taken verbatim or adapted as needed to suit our inpatient setting.
The first section collected demographics including age, sex, and level of training. The second section focused on self-reported test ordering volumes which are not germane to the current analysis but may be published in a follow-up manuscript.
The third section focused on individuals’ test result management practices: we asked respondents “do you keep a record of the tests you have ordered?,” “do you have a system to detect if a patient fails to obtain a test you have ordered?,” and “do you keep a record of patients with pending test results at the time of hospital discharge?,” all with yes/no responses. Participants described their methods using free-text responses. We also asked the number of times in the past 2 months respondents encountered a result they “wish they had known about sooner.” We asked about satisfaction with test management systems, concern that ordered tests may not be performed, concern that abnormal results may “fall through the cracks,” and frequency of disclosure and documentation of normal and abnormal test results. These questions used five-point Likert-type responses from 1 representing a negative response (e.g., not at all confident, not important) to 5 representing a positive response (e.g., very confident, very important). We asked respondents for their confidence in “follow-up of clinically significant tests/investigations that are pending at hospital discharge,” again with a five-point Likert-type response. We asked respondents who was responsible for follow-up of studies pending at discharge (options included resident, attending physician, outpatient provider, or patient). Respondents could select multiple options but were also asked to identify the single most responsible individual.
Section 4 focused on education. Trainees were asked “how often do you receive feedback?” on appropriateness of ordered tests, timeliness of test result follow-up, and disclosure of test results to patients. Attendings were asked “how often do you provide feedback?” on the same topics. We pilot tested the survey to refine clarity and content prior to distribution.
Survey Administration
We distributed surveys between November 2016 and October 2017. Trainees completed the surveys during noon teaching conferences. Attending physicians were solicited through email and staff meetings. All surveys were completed anonymously. We calculated that a sample size of 125 completed surveys would provide 80% power to detect a 0.5-point difference in Likert-type responses between attendings and trainees.
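As a rough check, the stated sample size is consistent with the standard normal-approximation formula for a two-sample comparison, assuming (our assumption; the paper does not state it) a standard deviation of 1.0 Likert points and a two-sided α of 0.05:

```python
from statistics import NormalDist

# Normal-approximation sample size for a two-sample t test.
# The SD of 1.0 Likert points is an assumption, not stated in the paper.
alpha, power, delta, sd = 0.05, 0.80, 0.5, 1.0
z_a = NormalDist().inv_cdf(1 - alpha / 2)   # ≈ 1.96
z_b = NormalDist().inv_cdf(power)           # ≈ 0.84
n_per_group = 2 * ((z_a + z_b) * sd / delta) ** 2
print(round(n_per_group))  # → 63 per group, ~126 total
```

Roughly 63 respondents per group, i.e., about 126 in total, is in line with the 125 completed surveys targeted.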
Statistical Analysis
We used basic descriptive statistics for respondent demographics. We compared responses of attendings and trainees using the chi-square statistic for categorical responses and the t test for continuous responses, with p values < 0.05 judged statistically significant. Likert-type responses were dichotomized into positive (4 or 5) or neutral/negative (1 to 3). We conducted stratified analyses to evaluate result management by respondent sex, level of training (medical students vs. residents), and health system (TGH and TWH vs. MSH).
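The dichotomized comparison can be illustrated with a hand-computed Pearson chi-square on a 2 × 2 table. The counts below are back-calculated from the reported satisfaction percentages (42.4% of 33 attendings and 12.1% of 99 trainees satisfied) and are illustrative only:

```python
# Pearson chi-square for a 2x2 table, without continuity correction.
# Counts are back-calculated from reported percentages (illustrative only).

def chi_square_2x2(a, b, c, d):
    """Rows: groups (attendings, trainees); columns: satisfied, not satisfied."""
    n = a + b + c + d
    # Closed form: n * (ad - bc)^2 / ((a+b)(c+d)(a+c)(b+d))
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

stat = chi_square_2x2(14, 19, 12, 87)  # attendings 14/33, trainees 12/99 satisfied
print(round(stat, 1), stat > 3.84)     # → 14.4 True (exceeds the p = 0.05 critical value)
```

A statistic of about 14.4 far exceeds the 3.84 critical value for 1 degree of freedom, consistent with the p < 0.001 reported for this comparison.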
We used logistic regression models to explore the association of self-reported satisfaction (positive Likert 4–5 vs. neutral/negative Likert 1–3) with personal test result management practices and respondent characteristics (attending vs. trainee; male vs. female).
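For a single binary predictor, an unadjusted logistic regression yields the same odds ratio as the 2 × 2 cross-tabulation, which can be computed directly along with a Wald confidence interval. The sketch below uses counts back-calculated from the reported satisfaction percentages and is illustrative only; this crude OR need not match the paper's published model estimates:

```python
from math import exp, log, sqrt

# Crude odds ratio and Wald 95% CI from a 2x2 table. An unadjusted logistic
# regression on one binary predictor produces the same point estimate.
# Counts back-calculated from reported percentages; illustrative only.

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a,b: exposed (satisfied, not); c,d: unexposed (satisfied, not)."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo, hi = exp(log(or_) - z * se), exp(log(or_) + z * se)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(14, 19, 12, 87)  # attendings vs. trainees, satisfied vs. not
print(f"OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

Because the confidence interval is computed on the log scale and back-transformed, it is asymmetric around the point estimate, as is standard for odds ratios.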
Free-text responses (Online Appendix, Survey Tool, Section 3, Questions 1, 2, and 8) were analyzed using qualitative methods. We developed a preliminary coding scheme based upon test result management strategies that have been cited in the literature; themes were then modified to reflect and encompass the methods reported by respondents. A sample of 33% of surveys was reviewed in duplicate by two study authors to ensure that codes were clear.
Statistical analyses were performed using Microsoft Excel 2013 (Microsoft Corp., Redmond, WA) and R statistical software (version 3.4.0; R Core Team, Vienna, Austria). Institutional review board approval was obtained at each hospital site and the University of Toronto.
RESULTS
Our overall response rate was 83.0% (132/159) [91.7% for trainees (99/108) and 64.7% for attendings (33/51)] (p < 0.001). The mean age of attendings was 42.5 years (37.5% women) and of trainees 27.3 years (43.4% women).
Test Result Management Practices
40.9% of respondents (54/132) reported maintaining a record of tests that they order, and 50.0% (66/132) reported having a system to track if patients fail to receive ordered tests. 19.7% (26/132) were satisfied with their result management systems, while 40.2% (53/132) were concerned that test results may “fall through the cracks.” Comparison of attendings and trainees revealed a number of differences (Table 1).
46.2% of all respondents (60/130) reported keeping a record of TPADs (Table 2). 58.6% (75/128) felt they were aware of 75% or more of all TPADs. Only 33.3% (42/126) reported following up on all clinically significant TPADs, and only 3.9% (5/129) reported listing all clinically significant TPADs in the discharge summary. Both attendings and trainees indicated that the person primarily responsible for follow-up of TPADs was the attending (Table 2), but that trainees, patients, and the outpatient physician bore some responsibility as well.
Delays in Test Result Follow-up
93.7% of respondents (119/127) reported encountering at least one test result that they “wish they had known about sooner” in the past 2 months, and 29.1% (37/127) reported encountering 5 or more (Fig. 1). Attendings were less likely than trainees to report at least one result they wished they had known about sooner (81.3 vs. 97.9%, p = 0.001).
Self-Described Methods for Managing Test Results
Methods used to manage test results are described in Table 3. Common methods for tracking ordered tests included using the team sign-out list (40.7%, 22/54), the EHR (38.9%, 21/54), and a personal handwritten list (14.8%, 8/54) (respondents could indicate multiple methods). Only 18.5% (10/54) specifically mentioned using the inbox function available at the University Health Network (UHN) hospitals (TGH and TWH) to track ordered tests. Methods differed between attendings and trainees (Table 3).
Among respondents who reported keeping a record of TPADs, 30.0% (18/60) used a hand-written or electronic list, 26.7% (16/60) used EHR functionality, 20.0% (12/60) used the team sign-out list, 5.0% (3/60) used a follow-up appointment, and 3.3% (2/60) used the discharge summary. Trainees were more likely to report using the sign-out list, while attendings were more likely to report using EHR functions such as the inbox (Table 3).
Determinants of Physician Satisfaction
Univariate regression analyses suggested increased satisfaction with test result management strategies among attending physicians compared to trainees (odds ratio (OR) 1.43, 95% CI 1.18–1.83, p < 0.001), among respondents who reported having a system to detect if a patient fails to receive an ordered test compared to respondents with no system (OR 1.27, 95% CI 1.09–1.53, p = 0.002), and among those who used the inbox for tracking tests compared to those who did not (OR 1.50, 95% CI 1.11–2.31, p = 0.005).
Education on Test Result Management
Attendings were much more likely to report teaching trainees about test ordering practices, test result follow-up, and result disclosure than the trainees were to report receiving such teaching (Fig. 2).
Subgroup Analysis
We found medical students felt that it was important to notify patients of normal test results more often than residents (52 vs. 24%, p = 0.008), and medical students were more likely to self-report documenting having notified patients about normal test results (21 vs. 4%, p = 0.01). There were no other significant differences by level of training or for men compared to women.
In comparisons across sites, respondents at TGH/TWH were more likely to answer yes to having a system for detecting if a patient fails to obtain a test when compared to MSH (58.4 vs. 38.2%, p = 0.02). Respondents from TGH/TWH also had higher levels of satisfaction with test result management (26.0 vs. 10.9%, p = 0.03) (Online Appendix).
DISCUSSION
In a survey of Canadian internal medicine physicians and trainees practicing in the inpatient setting, respondents reported multiple problems with test result management. Problems included delays in recognition of abnormal results, inconsistent utilization of existing EHR test result management tools, dissatisfaction with personal methods for managing results, and lack of agreement about who is responsible for follow-up of TPADs. A number of our findings warrant discussion.
First, it is important to acknowledge the documented harms of missed test results including untreated infections,9, 24 missed malignancies,5, 25, 26 missed aortic aneurysms,27 missed osteoporosis,28 and other abnormalities.29 Much of the existing literature is becoming dated and comes from the US outpatient setting. Our results expand on prior studies by providing contemporary data from the inpatient setting in Canada.
While we are unaware of any longitudinal studies looking at changes in missed results over time, comparing our work with older studies offers a starting point. Specifically, 94% of respondents in our study reported encountering at least one result they “wish they had known about sooner” in the past 2 months and 29% reported encountering 5 or more. By comparison, a 2004 survey of US outpatient internists (using the identical question) found that 83% had encountered one delayed test result in the previous 2 months and 18% had encountered 5 or more.13 Our results suggest that delayed recognition of abnormal test results remains a problem.
Second, we found that attendings and trainees were not using functions in our EHRs designed to facilitate test result management. The study hospitals have EHRs with user interfaces for tracking ordered tests, and two sites (TGH and TWH) have an inbox for physician review/sign-off on results. Despite these tools, 59% of respondents reported lacking a method to track tests from order entry to completion; only 16% of all survey respondents reported using the EHR to track ordered tests, and only 8% mentioned the inbox. Our findings are consistent with a 2015 UK study that found that primary care practices did not use features of the EHR specifically designed to make test result management easier.14 It is important to consider that EHR adoption in Canada has been low compared to other developed countries.30 Future investments will need to consider the socio-technical aspects of EHR adoption.31 If physicians are not taught how EHR functions can improve efficiency and patient safety, it is unsurprising that they do not use those functions, a pattern our survey appears to support.
Our finding that only 20% of respondents were satisfied with their test result management practices provides further evidence of a problem, particularly given that 80% of Canadian internal medicine physicians report being satisfied with their professional life.32 This dissatisfaction is consistent with older outpatient US studies and provides the first evidence we know of extending these results to Canadian inpatient care.13
Third, comparison of responses from attendings and trainees warrants comment. We found lower satisfaction and higher self-reported encounters with delayed result recognition for trainees compared to attendings. It is possible that these results reflect improvements in performance with experience; alternatively, it is possible that attendings have a false sense of confidence. Empirical studies are needed. We also found it interesting that 97% of attendings and 86% of trainees felt the attending was most responsible for TPAD follow-up. Conversely, we view it as troubling that few respondents think trainees should be responsible for follow-up of TPADs even though trainees enter the orders for the vast majority of tests in our hospitals. One could argue that it is inconsistent to allow trainees to order tests but then absolve them of responsibility for follow-up.33 Finally, fewer than 10% of trainees in our study reported receiving education on test result management, while 42% of attendings reported teaching on this topic, highlighting an opportunity to improve training at our institution.
Fourth, it is important to consider potential solutions to improve follow-up of test results. Most interventions studied in the outpatient setting have focused on health information technology and EHRs,34,35,36 with mixed evidence in terms of effectiveness.37, 38 Promising interventions for TPADs include enhanced discharge summaries that are auto-populated with test results that are both completed and pending, and notification systems that distribute email alerts when pending results become finalized.39 There are many potential solutions that are largely unstudied. For example, healthcare teams could assign test result follow-up responsibilities to dedicated staff. Patient portals could be enhanced to empower patients to take an active role in checking their own results and notifying their healthcare team when questions arise.40 Patient empowerment has proven effective in other care settings.41, 42 Finally, it is important to consider the role of personal responsibility with respect to test result follow-up.43 It may be reasonable to argue that the individual who orders a test should be expected to follow up on the result.44
Our study has limitations that warrant mention. First, our study was conducted at three Toronto teaching hospitals and results should be generalized with care. Second, our study relies on self-report; future studies should use an alternative method such as medical record review or review of EHR audit logs to assess the magnitude of the problem of missed results. For example, when asking respondents about results they “wish they had known about sooner,” we were not able to identify the type of result involved or the reason for the delay. Third, we lack quantitative data on the types of tests being ordered and missed. Finally, our study focused on inpatient internal medicine wards, and it will be important to verify our results in other inpatient services.
In conclusion, we found that deficiencies in the test result management process identified more than a decade ago persist in contemporary Canadian inpatient teaching hospitals. While a number of promising solutions are emerging, rigorous evaluation and widespread implementation remain distant.
References
Callen J, Georgiou A, Li J, Westbrook JI. The safety implications of missed test results for hospitalised patients: a systematic review. BMJ Qual Saf. 2011;20:194–9.
Callen J, Westbrook JI, Georgiou A, Li J. Failure to follow-up test results for ambulatory patients: a systematic review. J Gen Intern Med. 2011;27(10):1334–48.
Hickner J, Graham DG, Elder NC, et al. Testing process errors and their harms and consequences reported from family medicine practices: a study of the American Academy of Family Physicians National Research Network. Qual Saf Health Care. 2008;17:194–200.
Schiff GD, Hasan O, Kim S, et al. Diagnostic error in medicine: analysis of 583 physician-reported errors. Arch Intern Med. 2009;169(20):1881–7.
Wahls TL, Cram P. The frequency of missed test results and associated treatment delays in a highly computerized health system. BMC Fam Pract. 2007;8(32):1–8.
Wahls TL, Haugen T, Cram P. The continuing problem of missed test results in an integrated health system with an advanced electronic medical record. Jt Comm J Qual Patient Saf. 2007;33(8):485–92.
Hawkins R. Managing the pre- and post-analytical phases of the total testing process. Ann Lab Med. 2012;32:5–16.
Plebani M. The detection and prevention of errors in laboratory medicine. Ann Clin Biochem. 2010;47:101–10.
El-Kareh R, Roy C, Brodsky G, Perencevich M, Poon EG. Incidence and predictors of microbiology results returning postdischarge and requiring follow-up. J Hosp Med. 2011;6(5):291–6.
Kantor MA, Evans KH, Shieh L. Pending studies at hospital discharge: a pre-post analysis of an electronic medical record tool to improve communication at hospital discharge. J Gen Intern Med. 2015;30(3):312–8.
Kripalani S, LeFevre F, Phillips CO, Williams MV, Basaviah P, Baker DW. Deficits in communication and information transfer between hospital-based and primary care physicians: implications for patient safety and continuity of care. JAMA. 2007;297(8):831–41.
Roy CL, Poon EG, Karson AS, et al. Patient safety concerns arising from test results that return after hospital discharge. Ann Intern Med. 2005;143(2):121–8.
Poon EG, Gandhi TK, Sequist T, Murff HJ, Karson AS, Bates DW. “I wish I had seen this test result earlier!”: Dissatisfaction with test result management systems in primary care. Arch Intern Med. 2004;164(20):2223–8.
Litchfield I, Bentham L, Lilford R, McManus RJ, Hill A, Greenfield S. Test result communication in primary care: a survey of current practice. BMJ Qual Saf. 2015;24(11):691–9.
Boohaker EA, Ward RE, Uman JE, McCarthy BD. Patient notification and follow-up of abnormal test results: A physician survey. Arch Intern Med. 1996;156(3):327–33.
Elder NC, McEwen TR, Flach JM, Gallimore JJ. Management of test results in family medicine offices. Ann Fam Med. 2009;7(4):343–51.
Litchfield I, Bentham L, Hill A, McManus RJ, Lilford R, Greenfield S. Routine failures in the process for blood testing and the communication of results to patients in primary care in the UK: a qualitative exploration of patient and provider perspectives. BMJ Qual Saf. 2015;24(11):681–90.
Menon S, Smith MW, Sittig DF, et al. How context affects electronic health record-based test result follow-up: a mixed-methods evaluation. BMJ Open. 2014;4(11):e005985.
Shirts BH, Perera S, Hanlon JT, et al. Provider management of and satisfaction with laboratory testing in the nursing home setting: results of a national internet-based survey. J Am Med Dir Assoc. 2009;10(3):161–6.
Collier R. National physician survey: EMR use at 75%. CMAJ. 2015;187(1):E17–8.
Schoen C, Osborn R, Squires D, et al. A survey of primary care doctors in ten countries shows progress in use of health information technology, less in other areas. Health Aff. 2012;31(12):2805–16.
Squires D, Anderson C. US health care from a global perspective: spending, use of services, prices, and health in 13 countries. Issue Brief (Commonw Fund). 2015;15:1–15.
Brown AD, Pister PW, Naylor CD. Regionalization does not equal integration. Healthc Pap. 2016;16(1):4–6.
Greenes DS, Fleisher GR, Kohane I. Potential impact of a computerized system to report late-arriving laboratory results in the emergency department. Pediatr Emerg Care. 2000;16(5):313–5.
Chen ZJ, Kammer D, Bond JH, Ho SB. Evaluating follow-up of positive fecal occult blood test results: lessons learned. J Healthc Qual. 2007;29(5):16–20.
Choksi VR, Marn CS, Bell Y, Carlos R. Efficiency of a semiautomated coding and review process for notification of critical findings in diagnostic imaging. Am J Roentgenol. 2006;186(4):933–6.
Gordon JR, Wahls T, Carlos R, Pipinos II, Rosenthal GE, Cram P. Failure to recognize newly identified aortic dilations in a health care system with an advanced electronic medical record. Ann Int Med. 2009;151(1):21–7.
Cram P, Rosenthal GE, Ohsfeldt R, Wallace RB, Schlechte J, Schiff GD. Failure to recognize and act on abnormal test results: the case of screening bone densitometry. Jt Comm J Qual Patient Saf. 2005;31(2):90–7.
Schiff GD, Kim S, Krosnjar N, et al. Missed hypothyroidism diagnosis uncovered by linking laboratory and pharmacy data. Arch Intern Med. 2005;165(5):574–7.
Schoen C, Osborn R, Squires D, et al. A survey of primary care doctors in ten countries shows progress in use of health information technology, less in other areas. Health Aff 2012;31(12):2805–16.
Carayon P, Bass E, Bellandi T, Gurses A, Hallbeck S, Mollo V. Socio-technical systems analysis in health care: a research agenda. IIE Trans Healthc Syst Eng. 2011;1(1):145–60.
Canadian Medical Association. Physician Data Center (PDC): physician workforce surveys. 2017. Available at: https://www.cma.ca/En/Pages/physician-workforce-surveys.aspx. Accessed May 29, 2018.
College of Physicians and Surgeons of Ontario (CPSO). Public and physician advisory service: policy statement 1-11# test results management. Dialogue. 2011;1–5.
Bates DW, Cohen M, Leape LL, Overhage JM, Shabot MM, Sheridan T. Reducing the frequency of errors in medicine using information technology. J Am Med Inform Assoc. 2001;8(4):299–308.
Poon EG, Wang SJ, Gandhi TK, Bates DW, Kuperman GJ. Design and implementation of a comprehensive outpatient results manager. J Biomed Inform. 2003;36(1–2):80–91.
Singh H, Arora HS, Vij MS, Rao R, Khan MM, Petersen LA. Communication outcomes of critical imaging results in a computerized notification system. J Am Med Inform Assoc. 2007;14(4):459–66.
Murphy DR, Meyer AN, Russo E, Sittig DF, Wei L, Singh H. The burden of inbox notifications in commercial electronic health records. JAMA Intern Med. 2016;176(4):559–60.
Singh H, Naik AD, Rao R, Petersen LA. Reducing diagnostic errors through effective communication: harnessing the power of information technology. J Gen Intern Med. 2008;23(4):489–94.
Darragh PJ, Bodley T, Orchanian-Cheff A, Shojania KG, Kwan JL, Cram P. A systematic review of interventions to follow-up test results pending at discharge. J Gen Intern Med. 2018;33(5):750–8.
Goldzweig CL, Orshansky G, Paige NM, et al. Electronic patient portals: evidence on health outcomes, satisfaction, efficiency, and attitudes: a systematic review. Ann Intern Med. 2013;159(10):677–87.
Hibbard JH, Stockard J, Mahoney ER, Tusler M. Development of the Patient Activation Measure (PAM): conceptualizing and measuring activation in patients and consumers. Health Serv Res. 2004;39(4–1):1005–26.
Wagner EH. Chronic disease management: what will it take to improve care for chronic illness? Eff Clin Pract. 1998;1:2–4.
Moriates C, Wachter RM. Accountability in patient safety. Patient safety network: perspectives on safety. 2016. Available at: https://psnet.ahrq.gov/perspectives. Accessed June 15, 2018.
McTiernan P, Wachter RM, Meyer GS, Gandhi TK. Patient safety is not elective: a debate at the NPSF patient safety congress. BMJ Qual Saf. 2015;24(2):162–6.
Prior Presentations
Preliminary data were presented at the Society of General Internal Medicine Annual Meeting, Washington DC, April 19–22, 2017.
Funding
PC was supported by a K24 award from NIAMS (AR062133) at the US NIH.
Ethics declarations
Conflict of Interest
The authors declare that they do not have conflicts of interest.
Bodley, T., Kwan, J.L., Matelski, J. et al. Test Result Management Practices of Canadian Internal Medicine Physicians and Trainees. J GEN INTERN MED 34, 118–124 (2019). https://doi.org/10.1007/s11606-018-4656-7