Introduction

The delivery of laboratory tests consists of various phases, from the pre-preanalytical phase, in which the clinician decides which test to order, to the post-postanalytical phase, in which the clinician decides on appropriate treatment after receiving the laboratory result. This has been dubbed the “brain-to-brain loop in laboratory testing”, as first described by Lundberg [1, 2]. Errors may occur in any of these phases, and studies of errors specifically in the extra-analytical phases of laboratory testing are needed. It is estimated that 70–80% of all health care decisions affecting diagnosis, treatment and follow-up of patients involve pathology investigations, and laboratory errors may be associated with inappropriate patient care in 6.3–24.4% of cases [3,4,5]. Up to 73% of laboratory errors may be preventable [5].

The Institute of Medicine (IOM) published a report focusing on the impact of errors in medical care, which highlighted the problem of laboratory errors [6]. Laboratory errors are defined by the International Organization for Standardization (ISO) as “failure of a planned action to be completed as intended, or use of a wrong plan to achieve an aim, occurring at any part of the laboratory cycle, from ordering examinations to reporting results and appropriately interpreting and reacting to them” [7]. The Royal College of Pathologists of Australasia states that “the responsibility of pathology providers commences with the receipt of a request for a pathology test/investigation, and continues until the outcomes are communicated to the requester” [8].

The pre-preanalytical phase is where the clinician decides which test to request from the laboratory. Recent literature has found that inappropriate tests are being requested, mainly due to the plethora of new tests available, increased patient knowledge and the requesting clinician's fear of litigation.

The preanalytical phase is a labour-intensive phase covering everything from sample collection until the sample is ready to be analysed. Most errors in this phase are due to the human factor and a lack of harmonization of these processes [4, 9,10,11]. Reports indicate that up to 70% of laboratory errors occur in this phase [10, 11]. The recognition of these errors is important as they may influence patient care [12].

The analytical phase involves the actual performance of the laboratory test, i.e. the measurement of the analyte, the validation of the result and the release of the result for review. Due to improvements in laboratory assays, automation, improved quality control (QC) practices and calibration, the fewest errors now occur in this phase [4, 13,14,15]. Standardization of assays has also led to decreased interlaboratory variation and harmonization of the analytical phase. External QC schemes are regularly used by laboratories to monitor their performance in this phase, which has led to a drastic reduction in errors.

The postanalytical phase involves, among other things, the reporting of results to clinicians, the communication of critical values and turnaround time (TAT) [16].

Finally, the post-postanalytical phase refers to the clinician’s response to a laboratory test result. Strictly speaking, this phase also does not fall under the laboratory’s responsibility; however, if the laboratory sends out incorrect results, it may contribute to post-postanalytical errors.

There has been recent interest in errors in the extra-analytical phases, with entire working groups and congresses dedicated to them [17]. For the purpose of this review we will focus only on the extra-analytical phases in clinical chemistry, as covering errors in other areas of pathology is beyond the scope of this review. We will also discuss some studies, performed both in our laboratory and elsewhere, highlighting the importance of these errors.

The Pre-Preanalytical Phase

This phase covers the period before the sample reaches the laboratory, in which the clinician decides which test to order [18]. With the recent explosion of new tests on the market, this is often a difficult decision, and clinicians cite many reasons for ordering tests [19]. Strictly following the definitions of laboratory errors, these decisions would not fall under the responsibility of the laboratory. However, the chemical pathologist has a certain responsibility to educate clinicians to ensure optimal use of pathology services.

Evidence Based Medicine (EBM) and Evidence Based Laboratory Medicine (EBLM)

With the explosion of new medical knowledge and more than 20,000 journals constantly publishing their findings, clinicians struggle to keep up with developing knowledge [20]. A well-intentioned clinician may order the wrong test, and the evidence needs to be evaluated to decide whether a test is appropriate [20]. The concept of EBM was first described by Sackett et al. as “the conscientious, explicit and judicious use of current best evidence, in making decisions about the care of patients” [21]. Evidence-based guidelines are useful for appropriate test selection [14, 22, 23].

Demand Management and Test Requests

The use of laboratory tests increases annually, at substantial cost. These increases may be due to an aging population with more patients with chronic diseases; better informed patients who, after self-diagnosis via “Dr Google”, request tests from the treating clinician; the availability of more tests with quick turnaround times; and clinicians' fear of litigation and consequent defensive testing [24,25,26]. These tests may be appropriate or inappropriate. Inappropriate tests not only lead to increased costs and wasted labour, but may also lead to unnecessary further investigation of the patient, with associated anxiety.

Demand management studies are important to determine whether tests are really needed [24, 27, 28]. Demand management within a health-care system can be defined as manipulating the use of health resources to maximise their utility, ensuring the right test on the right patient at the right time [28]. Electronic gatekeeping (eGK) has been introduced at many laboratories as a means of cost cutting and demand management. This involves blocking repeat requests for a test within a defined minimum retest interval, as sketched below. We recently performed a study at our unit examining the use of eGK as a demand management tool and found that it led to a significant cost saving without affecting patient care [29].
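
As an illustration, a minimal sketch of such a rule in Python, assuming hypothetical test codes and minimum retest intervals (actual eGK rules are agreed locally and are not reproduced here):

```python
from datetime import date, timedelta
from typing import Optional

# Hypothetical minimum retest intervals in days; real eGK rules are
# agreed locally and are test-specific.
MIN_RETEST_INTERVAL = {
    "HBA1C": 60,
    "LIPIDS": 28,
    "TSH": 28,
}

def egk_allows(test: str, last_tested: Optional[date], today: date) -> bool:
    """Return True if the request passes the electronic gatekeeping rule."""
    interval = MIN_RETEST_INTERVAL.get(test)
    if interval is None or last_tested is None:
        return True  # no rule for this test, or no previous result on file
    return today - last_tested >= timedelta(days=interval)

# An HbA1c repeated after only 30 days would be blocked:
print(egk_allows("HBA1C", date(2024, 1, 1), date(2024, 1, 31)))  # False
```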

Needlestick Injuries

Needlestick injuries, with the associated risk of human immunodeficiency virus (HIV), hepatitis C virus (HCV) and hepatitis B virus (HBV) infection, are also a potential source of pre-preanalytical error. Proper phlebotomy training of staff may help to prevent them [11, 30]. Other preventive strategies include effective vaccination against HBV, safer use and disposal of needles and the provision of safety-engineered devices [30]. Standard operating procedures detailing the correct procedures, and the actions to be taken in the event of a needlestick injury, need to be available and read by all staff.

The Preanalytical Phase

This phase has recently been found to have the highest incidence of laboratory errors [4, 10, 11], and recent interest in this phase has led to increased publications and congresses pertaining to it [17, 31]. The main underlying problem with the preanalytical phase is that its processes are not harmonized, leading to potential errors [17, 32]. Harmonization of processes in this phase and the introduction of a quality monitoring system may decrease the risk of errors. Preanalytical errors may not only affect patient care, but may also contribute to increased healthcare costs: preanalytical errors may account for 0.023–1.2% of total hospital operating costs [33]. Table 1 lists some of the preanalytical variables which may contribute to errors.

Table 1 Preanalytical variables [34]

Laboratory Forms

The first step in the preanalytical phase is the filling in of request forms, which strictly falls under the pre-preanalytical phase. Details required on the request form include the clinician’s details (name and contact number), patient details (name, date of birth, gender and ward), the diagnosis and medication (to aid interpretation of results) and the tests requested. Inadequate patient information may confound interpretation of the result. As clinicians are notorious for their handwriting, incorrect data capturing is often one of the first sources of preanalytical errors. A study by Atay et al. found that unintelligible requests, missing input of tests and erroneous coding were common errors in the pre-preanalytical phase [35]. Furthermore, the data capturers or preanalytical staff may be unfamiliar with medical terms and abbreviations and may not be used to interpreting clinicians’ “hieroglyphics”. Missing patient details may mean a test cannot be reported correctly, e.g. an estimated glomerular filtration rate (eGFR) cannot be calculated without information on age or gender, as illustrated below. Some laboratories in the developed world use electronic request forms, which may prevent many of these errors; in developing countries, however, this is still a far-off dream.
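
To illustrate why these demographic fields matter, the sketch below uses the 4-variable MDRD study equation (one of several eGFR formulae; the race coefficient is omitted for brevity); the function and error handling are illustrative only:

```python
from typing import Optional

def egfr_mdrd(creat_mg_dl: float, age: Optional[int], female: Optional[bool]) -> float:
    """4-variable MDRD eGFR in mL/min/1.73 m^2 (race coefficient omitted)."""
    if age is None or female is None:
        # The analyser has a creatinine, but the report cannot be completed.
        raise ValueError("age and gender missing from request form")
    result = 175 * creat_mg_dl ** -1.154 * age ** -0.203
    return result * 0.742 if female else result

print(round(egfr_mdrd(1.0, 60, True), 1))  # reportable
# egfr_mdrd(1.0, None, None) -> ValueError: the form must be queried
```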

Sample Rejection

Often incorrect or insufficient samples are sent to the laboratory, leading to sample rejection. Most samples (40–70%) are rejected due to haemolysis [36]. Other causes are clotted specimens and insufficient volume [35]. A study examining causes of rejection of chemistry and haematology samples found that inadequate sample volume and inappropriate clotting were the most common causes of sample rejection [37]. However, we may have underestimated haemolysis as a rejection factor, as haemolysed samples are automatically rejected when they reach our laboratory. New automated instruments use the haemolysis index to detect sample haemolysis [11]. After haemolysis, lipaemia is the most frequent endogenous interference; it can influence the results of various laboratory methods by several mechanisms. The most common preanalytical cause of lipaemic samples is blood sampling too soon after a meal or after parenteral administration of synthetic lipid emulsions. Although the best way of detecting the degree of lipaemia is measuring the lipaemic index on analytical platforms, laboratory experts should be aware of its problems, such as false-positive results and a lack of standardization between manufacturers. Unlike other interferences, lipaemia can be removed and the measurement performed on a clear sample. However, the protocol for removing lipids from the sample has to be chosen carefully, since it depends on the analytes to be determined [38].
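
A simplified sketch of how automated serum-index screening might be applied; the numeric cut-offs are invented for illustration, since index scales and thresholds differ between manufacturers (one of the standardization problems noted above):

```python
# Hypothetical analyte-specific serum-index cut-offs; real thresholds are
# manufacturer- and method-dependent and are not standardized.
HAEMOLYSIS_LIMIT = {"K": 30, "LDH": 20, "AST": 50}    # H-index units
LIPAEMIA_LIMIT = {"TRIG": 2000, "NA": 1000}           # L-index units

def screen_sample(analyte: str, h_index: float, l_index: float) -> str:
    """Apply simple serum-index rules before releasing a result."""
    if h_index > HAEMOLYSIS_LIMIT.get(analyte, float("inf")):
        return f"REJECT {analyte}: haemolysis above limit"
    if l_index > LIPAEMIA_LIMIT.get(analyte, float("inf")):
        return f"HOLD {analyte}: lipaemic; clear sample and re-assay"
    return f"OK {analyte}"

print(screen_sample("K", h_index=55, l_index=100))    # rejected
print(screen_sample("AST", h_index=40, l_index=100))  # acceptable
```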

Contrast Medium Interference

The use of contrast media such as organic iodine molecules and gadolinium contrast agents is commonplace in diagnostic imaging. The described interferences for iodinated contrast media include inappropriate gel barrier formation in blood tubes, the appearance of abnormal peaks on capillary zone electrophoresis of serum proteins, and a positive bias in the measurement of cardiac troponin I with one immunoassay. The interferences for gadolinium contrast agents include a negative bias in calcium measurement with ortho-cresolphthalein colorimetric assays and an occasional positive bias with some Arsenazo reagents; a negative bias in the measurement of angiotensin converting enzyme and of zinc when a colorimetric assay is used; a positive bias for creatinine using the Jaffe reaction; and interference with total iron binding capacity using the ferrozine method, with magnesium using the calmagite reagent and with selenium determination by mass spectrometry. Interference has also been reported in the assessment of serum indices, pulse oximetry and methaemoglobin in samples from patients receiving Patent Blue V. Since the elimination half-life of these compounds is typically less than 2 hours, blood collection after this period may be a safer alternative in patients who have received contrast media for diagnostic purposes [39].

Urine Samples

Despite existing guidelines, the importance of a proper preanalytical procedure for collecting urine specimens is usually not known by patients. In a recent paper, Miler et al. showed that a 24-hour urine sample was not properly collected by more than half of informed outpatients, most of whom were older than 65 years and suffering from a chronic disease: the prescribed instructions were not followed, some volume of the urine sample was discarded or an improper container was used. To decrease the number of errors in the preanalytical phase, laboratory staff, general practitioners and patients should be educated, and active promotion of correct preanalytical procedures by laboratory staff should be encouraged. If the collection procedure was incorrect, the urine collection should be repeated [40]. Procedures for the collection of urine samples need to be standardised, and guidelines have been published [11, 41].

Incorrect Sample and Order of Draw

Another potential source of preanalytical error is a sample collected in the incorrect tube. Contamination with K-EDTA, leading to spuriously increased potassium and decreased calcium levels, is well described. Even an incorrect order of blood drawing may lead to this interference, which has prompted the Clinical and Laboratory Standards Institute (CLSI) [42] and the World Health Organization (WHO) [43] to publish guidelines for the “order of draw”. According to these guidelines, blood culture or sterile tubes are filled first, followed by citrate tubes, then plain/gel tubes and lastly tubes with additives such as heparin, K-EDTA and fluoride; a simple check is sketched below. Another potential cause of sample rejection may be a blood sample taken proximal to an indwelling venous line, leading to dilution of some analytes and inappropriate increases of others, depending on the contents of the drip [4].
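
A minimal sketch of an order-of-draw check based on the sequence above; the tube categories are simplified and local tube sets may differ:

```python
# Simplified order-of-draw ranking; local tube sets may differ.
ORDER_OF_DRAW = ["BLOOD_CULTURE", "CITRATE", "PLAIN_GEL",
                 "HEPARIN", "EDTA", "FLUORIDE"]

def order_ok(tubes_drawn: list) -> bool:
    """True if tubes were filled in non-decreasing order-of-draw rank."""
    ranks = [ORDER_OF_DRAW.index(t) for t in tubes_drawn]
    return all(a <= b for a, b in zip(ranks, ranks[1:]))

print(order_ok(["BLOOD_CULTURE", "CITRATE", "PLAIN_GEL"]))  # True
print(order_ok(["EDTA", "PLAIN_GEL"]))  # False: risks K-EDTA carry-over
```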

Patient Preparation

Patient preparation is important, as certain analytes may be influenced by factors such as physical activity, recent food intake and medication [44]. As the standardization of “fasting” and the effect of fasting on other analytes are still not well defined, the Working Group for the Preanalytical Phase (WG-PRE) has put forward some recommendations for consideration [45]. Another potential cause of preanalytical error at this stage is prolonged application of a tourniquet when difficulty is experienced in obtaining a blood sample, leading to falsely increased levels of certain analytes such as total protein. The introduction of transilluminating devices to assist with phlebotomy may help under these circumstances [9]. Intake of certain substances such as caffeine, nicotine and alcohol may also influence the levels of various analytes, and the posture of the patient must also be considered [46].

Several lines of evidence attest that short-, middle- and long-term exercise, as well as the relative intensity of physical effort (from mild to strenuous), may influence a broad array of laboratory variables. The extracellular release of most of these biomarkers, and their clearance from blood, are markedly influenced by the biological characteristics of the molecules, the level of training, the type, intensity and duration of exercise, and the time of recovery after training. It is hence noteworthy that test results falling outside the conventional reference ranges in athletes may not only reflect the presence of a given disease, but may frequently mirror an adaptation to regular training or changes that have occurred during and/or following strenuous exercise; this should be clearly acknowledged to prevent misinterpretation of laboratory data [44].

Another potential error at this stage is incorrect patient identification, i.e. the sample being drawn from the wrong patient. Care should always be taken to check the patient's identity and to make sure that the same information appears on the sample and the request form.

Time of Sampling

The time at which the blood sample is taken may also influence the result and be a cause of preanalytical error. Certain analytes such as cortisol have a circadian rhythm and should be sampled at a specific time; an increased midnight value (when levels should be low) is one of the screening tests for Cushing’s syndrome. Other analytes, such as the reproductive hormones in premenopausal females, are influenced by the time of the menstrual cycle at which they are sampled; an example is the day-21 progesterone level used to screen for the presence of ovulation. Yet others, such as vitamin D, may be influenced by seasonal factors. Biological variation (between- and within-subject) is also a potential preanalytical influence on laboratory testing and needs to be considered [11].

Transport to the Laboratory

Transport conditions to the laboratory need to be standardized with respect to both transport time and temperature to prevent further preanalytical errors. For example, refrigeration of whole blood at 4 °C inhibits the Na+/K+-ATPase pump, resulting in potassium leaking out of cells and spuriously high potassium levels [46].

Data Capturing/Personnel Problems

Data capturing is extremely error-prone, as it is exceptionally labour-intensive. As mentioned, incorrectly filled-in request forms and untidy handwriting may precipitate errors at this stage, especially where non-medically trained staff have to interpret the forms. The preanalytical reception area is thus one of the most error-prone areas of the laboratory, and incidentally also the one most reliant on humans. A further problem is that economic pressure and downsizing have led to staff shortages, with decreased morale among the remaining staff [47].

Sample Preparation

Once in the laboratory, the sample needs to be centrifuged and aliquoted before being taken to the analytical bench. These steps introduce further opportunities for error, and the use of automated workstations may prevent such errors [48]. Another potential error at this stage of the testing cycle is incorrect labelling, with the result being reported on the incorrect patient.

The Postanalytical Phase

This phase involves the reporting of the laboratory results to the requesting clinician.

Turnaround Time (TAT)

The TAT refers to the time that it takes for a laboratory result to reach the clinician. Increased TAT is one of the most common postanalytical errors and one of the most common causes of customer complaints in the laboratory [4]. Delays in any phase of the total testing process may increase the TAT and delay patient treatment, with a subsequent impact on patient care and increased costs.
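
Since TAT distributions are skewed, laboratories commonly monitor a percentile (e.g. the 90th) rather than the mean; a minimal sketch with invented timestamps:

```python
from datetime import datetime
from statistics import median, quantiles

# Invented (requested, reported) timestamp pairs for one test.
times = [
    (datetime(2024, 5, 1, 8, 0),  datetime(2024, 5, 1, 8, 55)),
    (datetime(2024, 5, 1, 9, 10), datetime(2024, 5, 1, 10, 40)),
    (datetime(2024, 5, 1, 9, 30), datetime(2024, 5, 1, 13, 30)),
]

tat_min = [(out - req).total_seconds() / 60 for req, out in times]
print(f"median TAT: {median(tat_min):.0f} min")
# quantiles(..., n=10) returns the deciles; index 8 is the 90th percentile.
print(f"90th percentile TAT: {quantiles(tat_min, n=10)[8]:.0f} min")
```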

Interpretative Comments

The quality of interpretative comments may be a source of postanalytical error, and various authors have discussed the advantages of introducing standardized interpretative comments for this purpose [49,50,51]. How the clinician interprets the results is also a potential source of error. Incorrect patient information on the request form (a preanalytical error) may lead to incorrect interpretative comments (a postanalytical error) [52].

Communication of Critical Results

Lundberg first reported the importance of critical result communication in 1972 [53], and ISO 15189 requires that critical results be communicated as quickly as possible to the treating clinician [54]. A large source of postanalytical error involves the communication of critical results to the clinician and the standardization of which critical values need to be communicated [16, 22, 54]. Incompletely filled-in laboratory request forms may also have a negative influence on the communication of critical results [55]. A study at our institution found a 10.8% error rate in the communication of critical results [56].

Unnecessary Repeat Testing

An accepted practice in many laboratories is to repeat critical results to confirm them before they are communicated to clinicians. However, our study and others have found this practice to be of no benefit, serving only to increase TAT [57,58,59]. If internal and external QC are satisfactory, this practice is unnecessary, as results in the normal range are accepted without being repeated. Following the results of our study, we no longer repeat critical results at our laboratory unless they fail the delta check (sketched below) [59]. This practice has led to improved TAT and decreased wastage and cost.
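
A minimal sketch of a delta check; the analytes and limits are invented for illustration, as delta-check limits are set locally:

```python
# Hypothetical delta-check limits: largest plausible relative change
# between consecutive results from the same patient within ~24 h.
DELTA_LIMIT = {"K": 0.25, "NA": 0.05, "CREAT": 0.50}

def fails_delta_check(analyte: str, previous: float, current: float) -> bool:
    """True if the change from the previous result exceeds the local limit."""
    limit = DELTA_LIMIT.get(analyte)
    if limit is None or previous == 0:
        return False  # no rule defined; rely on other checks
    return abs(current - previous) / previous > limit

# A critical potassium of 6.8 mmol/L after a recent 4.0 mmol/L fails the
# delta check, so it would be repeated before being communicated.
print(fails_delta_check("K", previous=4.0, current=6.8))  # True
```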

Reference Intervals

The use of various analysers and methods means that reference limits and decision limits are not harmonised and results are not comparable between laboratories [22]. This has led to the formation of numerous reference interval projects, such as the Nordic Reference Interval Project (NORIP) [60]; similar programs are underway in Australia and New Zealand [61], Japan [62], South Africa and many other countries. Improvements in the harmonization and standardization of assays may lead to common decision limits and improved guidelines for patient care [63].
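
In such projects, reference intervals are commonly estimated nonparametrically as the central 95% of results from a healthy reference population (conventionally at least 120 subjects); a sketch using simulated data in place of a real reference population:

```python
import random
from statistics import quantiles

random.seed(1)
# Simulated sodium results (mmol/L) from 120 healthy reference subjects,
# standing in for a real reference population.
values = [random.gauss(mu=140, sigma=3) for _ in range(120)]

# Nonparametric reference interval: 2.5th and 97.5th percentiles.
cuts = quantiles(values, n=40)  # 39 cut points at 2.5% steps
print(f"reference interval: {cuts[0]:.1f}-{cuts[-1]:.1f} mmol/L")
```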

The Post-Postanalytical Phase

This phase refers to how the clinician responds to the laboratory result. Is the result interpreted correctly by the clinician? Has an interpretative comment been provided [20]? Is appropriate treatment initiated? The decrease in pathology exposure in undergraduate medical training and unfavourable career options have led to a decrease in the number of staff being trained as laboratory professionals, which will have a detrimental effect on laboratory medicine [64]. Additionally, fewer people are training as medical technologists [14]. Unfortunately errors in this phase are difficult to detect and may be to the detriment of the patient.

Point of Care Testing (POCT) Errors

A full discussion of errors in POCT is beyond the scope of this review, but POCT must be mentioned, as it is the fastest growing segment of the current clinical laboratory testing market [14]. Although POCT devices are convenient, offering faster results and near-patient testing, they are also prone to errors. The absence of the quality systems used by laboratories may lead to errors, and the tests are often performed by untrained staff [11]. As a result, most POCT errors occur in the analytical phase, in contrast to traditional laboratory errors, where this phase has the fewest errors [14]. Moreover, many preanalytical errors may go unrecognised, as there is no process of sample rejection [65]. It is essential that laboratory professionals and clinicians work together closely to ensure the smooth implementation and monitoring of POCT.

Error Detection and the Responsibility of the Laboratory

As Plebani et al. have noted, “you cannot manage what you cannot measure”. This led the International Federation of Clinical Chemistry and Laboratory Medicine (IFCC) to launch a working group in 2008 named “Laboratory Errors and Patient Safety” (LEPS), whose primary goal was to identify quality indicators pertaining to the total testing process [66]. The group developed 57 quality indicators, including 35 for the preanalytical phase and 15 for the postanalytical phase [67,68,69]. These allow reliable comparison between laboratories, detection of errors and potentially the implementation of an external quality assurance program [70]. ISO 15189:2012 recommends quality indicators to monitor and evaluate laboratory performance [13]. In a retrospective study conducted in Spain, Salinas et al. evaluated preanalytical errors over a 10-year period using a single synthetic preanalytical indicator (expressed as a sigma level) that may be included in a balanced scorecard (BSC) management system. The synthetic indicator summarized overall preanalytical sample errors, and the authors concluded that it was a practical and effective methodology for monitoring preanalytical errors due to unsuitable samples over time as part of a BSC management system [71].
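
Quality-indicator error rates are often expressed as defects per million opportunities (DPMO) and converted to a sigma level; a sketch using the conventional 1.5-sigma shift, with invented counts:

```python
from statistics import NormalDist

def sigma_level(defects: int, opportunities: int) -> float:
    """Short-term sigma for a defect rate (conventional 1.5-sigma shift)."""
    dpmo = defects / opportunities * 1_000_000
    return NormalDist().inv_cdf(1 - dpmo / 1_000_000) + 1.5

# e.g. 150 mislabelled samples per 100,000 received (invented figures)
print(f"{sigma_level(150, 100_000):.2f} sigma")  # about 4.5
```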

However, there is a lack of external quality assessment (EQA) schemes for the extra-analytical phases of laboratory testing. Although ISO 15189:2012 recommends EQA programs for the entire testing process, including pre- and postanalytical procedures [72], EQA schemes currently focus on the analytical phase. The only official quality assurance system to monitor the extra-analytical phases is the Key Incident Monitoring and Management Systems (KIMMS) project, initiated by the Quality Assurance Scientific and Education Committee (QASEC) of the Royal College of Pathologists of Australasia. KIMMS is designed to provide pathology practices with the tools for continuous measurement and monitoring of key incident indicators [73]. A recent questionnaire sent out by the Association for Clinical Biochemistry and Laboratory Medicine Preanalytical Working Group (ACB-WG-PA) found that 91.8% of laboratories in the UK expressed interest in such a scheme [74].

Kristensen et al. have described methods that can be used for establishing such EQA programmes. These methods are of three types: collecting information about preanalytical laboratory procedures; circulating real samples to collect information on interferences that might affect the measurement process; and registering actual laboratory errors and relating these to quality indicators. As these three types have different focuses and different implementation challenges, a combination of the three is probably necessary to detect the wide range of errors that occur in the preanalytical phase [75].

EBLM provides evidence for best laboratory practice, and laboratory audits are essential to monitor whether the standards set by EBLM are being adhered to [76, 77]. Numerous working groups have now been established to monitor the extra-analytical phases, such as the ACB-WG-PA in the United Kingdom.

Conclusion

Recent publications have highlighted the importance of errors in the laboratory, and numerous audits have been undertaken to assess the impact of these errors and to attempt to implement changes. This concept of risk management aims to have strategies in place to detect and prevent potential errors. The development of standard operating procedures, and adherence to them by staff, is another potential method to reduce errors, and training of staff and clinicians, e.g. in phlebotomy techniques, is also beneficial. Unfortunately, there is still a lack of harmonization in the extra-analytical phases of laboratory testing [11, 78].

Until recently, accreditation focussed mainly on the analytical phase of laboratory testing. However, it is increasingly being realised that accreditation of laboratories needs to examine all steps of laboratory testing, and ISO 15189:2012 now requires that all phases of laboratory testing be evaluated and non-conformances issued if standards are not adhered to [11]. A major problem has been that, until recently, accepted standards for the extra-analytical phases were not available; the quality indicators that have been developed may be the solution to this problem. Audits can then be performed using these quality indicators as acceptable standards against which to compare performance as part of risk management programs, and improvement methodologies such as Six Sigma and Lean Management may be applied to reduce errors [11, 31, 79]. There has been an international call to harmonize these processes so that common standards will be available for benchmarking, and regular congresses on preanalytical errors are being held to highlight this problem [11, 17]. Numerous other strategies have also been employed to decrease errors, such as electronic ordering and the automation of many of the labour-intensive preanalytical steps with robotic workstations [48].