Introduction

Anemia, iron deficiency, and iron excess all result in poor health outcomes, which in turn lead to negative economic consequences [1]. Assessing anemia and iron status at the individual and population level can inform policies, programs, and interventions within food and health systems. This information is used to develop cost-effective strategies that target the most at-risk populations, improve preventive and treatment-based healthcare, evaluate the impact of interventions, and track prevalence trends.

Accurate, yet simple and inexpensive methods to assess anemia and iron status are critical. This is especially pertinent in low- and middle-income countries (LMIC), where these conditions are highly prevalent and there are greater cost and infrastructure constraints. This chapter focuses on measurement methods for biomarkers to assess anemia and iron status that are suitable for use in clinical laboratories and field settings in low-resource environments (Boxes 3.1 and 3.2).

Box 3.1 Selection of Seminal Reviews and WHO Documents on Methods to Assess Anemia and Iron Status

Biomarkers of Nutrition for Development (BOND)—Iron Review [2].

Measurement and interpretation of hemoglobin concentration in clinical and field settings: A narrative review [3].

WHO Guideline on Use of Ferritin Concentrations to Assess Iron Status in Individuals and Populations [4].

WHO Haemoglobin concentrations for the diagnosis of anaemia and assessment of severity [5].

WHO Serum transferrin receptor levels for the assessment of iron status and iron deficiency in populations [6].

Box 3.2 Source of Anemia and Iron Status Surveillance Data at the Population Level

The DHS Program serves as the largest global source of hemoglobin data. As of 2022, the protocol for Hb assessment has remained relatively consistent since testing was first introduced into The DHS Program in 1996, in order to maintain comparability [7]. The protocol consists of testing a single drop of capillary blood with the Hb-201+ device. In rare cases, the Hb-301 device has been used.

While anemia testing has been carried out on a large scale in several surveys, facilitated by field-friendly point-of-care Hb testing, iron status surveillance remains less common.

Iron status data is typically collected in a stand-alone micronutrient survey but has also been collected as part of multi-topic, agriculture, nutrition, and other surveys. Better harmonized data collection, processing, and analysis procedures would improve comparability across surveys.

Anemia Assessment

Anemia is a condition in which the physiological demand for oxygen in the body is not met; it results from impaired production, turnover, loss, or destruction of red blood cells (RBCs). Anemia is often accompanied by a decreased concentration of hemoglobin (Hb), a protein contained in red blood cells that delivers oxygen to the tissues, and by changes in RBC morphology.

The following section describes methods for assessing anemia. Hb is the most common hematologic measure used to define anemia. Thus, the greatest attention is given to point-of-care (POC) hemoglobinometers, the devices used to measure Hb concentrations in blood. While performed less frequently than Hb alone, other RBC parameters are measured to diagnose the type of anemia in a clinical setting. This is typically done on an automated hematology analyzer as part of a complete blood cell count (CBC), along with a peripheral blood smear for morphology. Less sophisticated methods used in clinical practice covered in this section include the WHO Haemoglobin Colour Scale and clinical pallor.

Hemoglobinometers

POC devices for Hb measurement, or hemoglobinometers, are well suited for use in the field and in primary healthcare clinics. They are portable, relatively inexpensive, and easy to use and can be powered by batteries. Use of POC devices in the field setting allows for the immediate return of results to survey participants and the ability to refer participants to a health clinic. In a healthcare setting, the POC devices provide an opportunity to diagnose and make clinical decisions while the patient is still present, which is especially important in LMIC where distance and time can be barriers to accessing care.

Instrumentation and Methodology. HemoCue® analyzers (HemoCue AB, Angelholm, Sweden) are the most widely used hemoglobinometers. The models currently on the market include the Hb-201+, introduced in 1990, followed by the release of modified versions, the Hb-301 and the Hb-801. The testing principle of the Hb-201+ analyzer differs from that of the Hb-301 and Hb-801 analyzers.

The measurement of Hb by the Hb-201+ analyzer is based on the hemiglobincyanide (HiCN) method, with potassium cyanide replaced by sodium azide. Whole blood is mixed with sodium deoxycholate, which lyses the RBCs and releases hemoglobin. The hemoglobin iron, in the presence of sodium nitrite, is oxidized to the ferric state to form methemoglobin. Methemoglobin reacts with sodium azide to form the stable colored complex azide methemoglobin, which has an absorbance maximum at 570 nm. The azide method is attractive to many users because it obviates the use of cyanide, a highly toxic chemical. Further, the azide methemoglobin complex is stable, and its absorbance maximum is similar to that of hemiglobincyanide. The photometer uses a measuring wavelength of 570 nm and a reference wavelength of 880 nm to compensate for turbidity in the sample. The azide method adheres to the Beer-Lambert law.
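To make the photometric principle concrete, the following minimal sketch converts a dual-wavelength absorbance reading to a concentration under the Beer-Lambert law; the absorptivity and path-length values are illustrative placeholders, not HemoCue® specifications.

```python
# Minimal sketch of dual-wavelength photometry under the Beer-Lambert law,
# A = epsilon * c * l. The absorptivity and path length below are illustrative
# placeholders, not HemoCue(R) specifications.

def hb_concentration(a_570: float, a_880: float,
                     epsilon: float = 0.68,    # placeholder mass absorptivity, L/(g*cm)
                     path_cm: float = 0.013) -> float:  # placeholder cuvette path length
    """Return Hb concentration in g/L from paired absorbance readings."""
    a_corrected = a_570 - a_880               # reference wavelength compensates for turbidity
    return a_corrected / (epsilon * path_cm)  # Beer-Lambert: c = A / (epsilon * l)

print(f"{hb_concentration(1.40, 0.07):.0f} g/L")  # ~150 g/L for this example
```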

Unlike the Hb-201+ analyzer, the Hb-301 analyzer uses filter photometry to measure Hb in whole blood. The principle of the Hb-301 method is based on measurement of whole blood absorbance at the isosbestic point of deoxyhemoglobin (HHb) and oxyhemoglobin (HbO2), where the absorbance is independent of oxygen saturation. The analyzer uses a measuring wavelength of 506 nm and a reference wavelength of 880 nm, which compensates for turbidity in the sample.

The HemoCue® analyzers are factory calibrated against the HiCN method of the International Council for Standardization in Hematology [8]. When problems are detected, the devices must be returned to the manufacturer for recalibration. Internal quality control is possible using liquid quality control solutions, although this is not required by the manufacturer [9].

Collection Procedures. The hemoglobinometers require approximately 10 μL of peripheral whole blood. If the Hb measurement will be performed elsewhere, or if a larger volume of blood is required to perform additional tests, the blood should be collected into a tube containing an anticoagulant. Collection tubes containing potassium ethylenediaminetetraacetic acid (K2 or K3EDTA) or lithium heparin in solid form should be used. Blood samples should not be collected in tubes containing sodium fluoride as an anticoagulant (HemoCue AB, Angelholm, Sweden). Sodium fluoride is used commonly to inhibit glycolysis in blood samples for glucose measurement but is also known to promote hemolysis [10].

The blood can be loaded into a microcuvette directly from the finger/heel (or tube) or placed on an intermediate surface and then loaded into the microcuvette. The microcuvette must be filled completely in one continuous process and void of air bubbles (Fig. 3.1). The outside of the microcuvette is wiped clean of excess blood and inserted into the photometer. Results are generated within 60 s (or less depending on the model).

Fig. 3.1

Hb-201+ microcuvettes. Far right is a correctly filled cuvette. Far left is a cuvette with the dry reagent (in yellow) prior to filling the cuvette with the blood specimen. Exposure of the cuvette to heat or moisture causes deterioration of the chemicals in the microcuvette and results in false increases in Hb concentrations. The cuvettes in the middle are examples of incorrect filling of the cavity. Air bubbles lower the concentration of red blood cells in the cuvette, resulting in false decreases in Hb concentrations, and incomplete filling does not allow the blood specimen to mix properly with the reagents in the microcuvette, also resulting in false decreases in Hb concentrations. © Ellie Brindle

Factors Affecting Hb Measurements. The HemoCue® device is generally considered to provide accurate and precise measurements of Hb concentration when used properly. There are, however, several pre-analytical, analytical, and post-analytical aspects to consider (Table 3.1).

Table 3.1 Factors influencing point-of-care and laboratory-based assessment of hemoglobin and anemia at different stages of testing

The blood source and quality of the blood sample are two notable pre-analytical factors that can impact the Hb concentration. Venous blood is considered the reference sample for hematological measurements. While there are indications that testing Hb concentrations using venous blood gives lower Hb values compared to capillary blood, mean differences have been within acceptable levels (threshold of ±7%) in a more controlled environment [11]. However, poor blood collection technique can lead to a compromised capillary sample (Table 3.1), and inherent variation in Hb concentration between capillary blood drops on the same individual has been observed [12]. In a survey context, substantially lower Hb values and consequently higher anemia estimates have been shown in surveys using capillary blood compared to surveys using venous blood matched by country and time [13].
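The ±7% allowable-difference criterion cited above reduces to simple arithmetic; the sketch below illustrates the check with invented paired readings.

```python
# Sketch of the +/-7% allowable-difference check referenced above for paired
# Hb readings (e.g., capillary vs. venous, or POC device vs. reference
# analyzer). Paired values are invented for illustration.

def percent_difference(test_g_l: float, reference_g_l: float) -> float:
    """Percent difference of a test reading relative to its reference."""
    return 100.0 * (test_g_l - reference_g_l) / reference_g_l

pairs = [(123, 118), (135, 141), (108, 112)]  # (test, reference) Hb in g/L
diffs = [percent_difference(t, r) for t, r in pairs]
mean_bias = sum(diffs) / len(diffs)
print(f"mean bias {mean_bias:+.1f}%; within threshold: {abs(mean_bias) <= 7.0}")
```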

Analytical considerations associated with the testing method include the performance of the HemoCue® device, model type, and environmental factors (Table 3.1). Comparisons of the HemoCue® device to an automated hematology analyzer (reference method) have generally found an allowable degree of variation (threshold of ±7%) even though the Hb-301 and 201+ devices tend to give higher Hb values [11]. In addition, the Hb-201+ device tends to read slightly lower than the Hb-301 device [3]. Use of the same model in all surveys is preferable, especially between surveys in the same country, although the extent to which these differences will impact anemia trends is likely minimal.

The performance of the HemoCue® hemoglobinometers is influenced by temperature and humidity and the integrity of the microcuvettes used for blood sampling. For optimal performance, the recommended storage and operating temperature of the Hb-201+ hemoglobinometer is 15–35 °C and for the Hb-301 and Hb-801, 10–40 °C. The active reagents in the Hb-201+ microcuvettes are sensitive to high temperature and high humidity, making them prone to degradation once a canister of microcuvettes is opened. Using degraded microcuvettes (reagents) can lead to higher Hb concentrations. The impact of the environment on the Hb-201+ microcuvette function can potentially be mitigated by limiting exposure of the device and microcuvettes to temperatures that fall outside the operating range and high humidity [14]. The Hb-301 and Hb-801 models use reagent-free cuvettes, offering greater stability in extreme weather conditions, but have been shown to result in artificial increases in Hb concentrations if the Hb readings are not performed within 20–30 s [11, 14].

At the postanalytical phase, results are usually transcribed by the technician, although there is the potential to digitally transmit the results from the measurement device over a Bluetooth bridge. To define anemia, WHO-recommended cutoffs, which depend on age, sex, and pregnancy status, are applied to Hb concentrations [5]. Prior to applying cutoffs, Hb concentrations are adjusted for altitude in populations living at high altitudes, to account for hypoxic conditions, and for smoking status. There is also some evidence that Hb concentrations vary by ethnicity, but adjustments for this have not been universally adopted [15]. Additional information on the definition of anemia is provided in Chap. 4.
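As an illustration of the adjust-then-classify workflow, the sketch below applies altitude and smoking adjustments before comparing against a cutoff; the adjustment values and the example cutoff reflect one reading of the WHO guidance [5] and should be verified against that source before use.

```python
# Sketch of the adjust-then-classify anemia workflow. Altitude and smoking
# adjustments and the example cutoff follow one reading of WHO guidance [5];
# verify against the source before use.

ALTITUDE_ADJ_G_DL = [  # (altitude floor in m, adjustment subtracted, g/dL)
    (4500, 4.5), (4000, 3.5), (3500, 2.7), (3000, 1.9),
    (2500, 1.3), (2000, 0.8), (1500, 0.5), (1000, 0.2), (0, 0.0),
]

def adjusted_hb(hb_g_dl: float, altitude_m: float, smoker: bool) -> float:
    """Apply altitude and smoking adjustments to a measured Hb value."""
    adj = next(a for floor, a in ALTITUDE_ADJ_G_DL if altitude_m >= floor)
    if smoker:
        adj += 0.3  # generic adjustment for smokers
    return hb_g_dl - adj

hb = adjusted_hb(12.4, altitude_m=2600, smoker=False)
print(f"adjusted Hb {hb:.1f} g/dL; anemic: {hb < 12.0}")  # cutoff shown: non-pregnant women 15+
```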

Automated Hematology Analyzer

The automated hematology analyzer is the most common method used in clinical laboratories (Table 3.1). In addition to Hb, the analyzer provides measures that are helpful in the evaluation of anemia for clinical purposes. These parameters, collectively known as a CBC, include hematocrit, mean corpuscular Hb, mean cell volume, RBC count, red cell distribution width, and reticulocyte count. Information on the interpretation of these hematologic indicators can be found in Lynch et al. and Karakochuk et al. [2, 3].

Blood is aspirated directly from a sample tube and mechanically diluted in an RBC lysis buffer, and the concentration of Hb is measured colorimetrically. Most instruments operate using the Coulter principle, whereby particles and cells suspended in fluid are forced through a small aperture between electrodes and the change in electrical impedance is measured. The instrument may be paired with a system for automatically capturing clinical data, or it may be necessary to manually enter results into a database.
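The derived red cell indices listed above are computed from the directly measured Hb, hematocrit, and RBC count using standard clinical formulas, as in the brief sketch below (example values are invented and show a pattern typical of iron deficiency).

```python
# Standard derived red cell indices from directly measured CBC values.
# The formulas are the conventional clinical definitions; the example values
# are invented and show a microcytic, hypochromic pattern typical of iron
# deficiency.

def red_cell_indices(hb_g_dl: float, hct_pct: float, rbc_millions_ul: float):
    mcv = hct_pct * 10 / rbc_millions_ul   # mean cell volume, fL
    mch = hb_g_dl * 10 / rbc_millions_ul   # mean corpuscular Hb, pg
    mchc = hb_g_dl * 100 / hct_pct         # mean corpuscular Hb concentration, g/dL
    return mcv, mch, mchc

mcv, mch, mchc = red_cell_indices(hb_g_dl=9.5, hct_pct=30.0, rbc_millions_ul=4.2)
print(f"MCV {mcv:.0f} fL, MCH {mch:.1f} pg, MCHC {mchc:.1f} g/dL")
```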

The automated hematology analyzer is not ideal for use in primary and secondary level healthcare laboratories because operating the instrument is not cost-effective for laboratories with small sample loads. Further, operating these instruments requires a constant source of electricity, routine maintenance and calibration, and the resources to procure required reagents and supplies. Countries are often constrained by the cost of purchasing and maintaining equipment and the availability of experienced operators; investments to foster improvements in laboratory medicine are needed [16].

WHO Haemoglobin Colour Scale

The WHO Haemoglobin Colour Scale (HCS) is a semi-quantitative method that estimates the Hb concentration in whole blood by comparing the color of a blood sample to a color on the scale with a known Hb value (Table 3.1). Its intended use is to diagnose anemia in primary healthcare settings that lack the capacity or resources to measure Hb using an automated system or a POC device. To determine the Hb concentration using the Colour Scale, a drop of blood is placed on chromatography filter paper and its color compared against shades of red on the Colour Scale that correspond to different Hb levels (i.e., 4, 6, 8, 10, 12, and 14 g/dL) [17]. The quality of the blood spot, ambient lighting, and timing of readings can impact the test results, but overall the method has been found simple to administer [18]. The HCS performs better than clinical assessment alone, but a cost-benefit analysis of the added advantage of the HCS in improving clinical care would better inform its use [19].

Clinical Pallor

A physical examination can be used to detect anemia by assessing the pallor of the skin or mucous membrane (Table 3.1). Sites where capillary blood vessels are close to the surface are examined. The palms, conjunctiva, and nail beds are the most frequent sites and are appropriate regardless of skin pigmentation.

The sensitivity and specificity of clinical pallor for diagnosing anemia vary across studies, but overall clinical pallor appears to have limited ability to detect anemia. This applies in particular to milder cases of anemia and to settings where anemia is highly prevalent [20]. These findings are based on a small number of studies, mostly in Africa; thus, the extent to which clinical pallor can support health management decisions requires further exploration.
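The performance figures reported in such studies come from 2×2 comparisons of pallor against an Hb-based reference standard; the counts in the sketch below are invented purely to illustrate the calculation.

```python
# Sketch of how sensitivity and specificity of clinical pallor are computed
# against an Hb-based reference standard. Counts are invented for illustration.

tp, fn = 40, 60   # anemic by Hb: pallor present / pallor missed
tn, fp = 180, 20  # non-anemic by Hb: pallor absent / falsely present

sensitivity = tp / (tp + fn)  # proportion of true anemia cases detected
specificity = tn / (tn + fp)  # proportion of non-anemic correctly ruled out
print(f"sensitivity {sensitivity:.0%}, specificity {specificity:.0%}")
```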

Use of clinical pallor is imperfect but is a common anemia assessment approach. The WHO-UNICEF Integrated Management of Childhood Illness (IMCI) guidelines, a cornerstone of pediatric care, include use of palmar (palm of the hand) pallor to check for anemia and provide treatment and, in severe cases, referral to a hospital. Clinical pallor will likely continue to be an assessment approach used in healthcare settings when hematologic measures are not available.

Biochemical Iron Indicators

Iron status assessment generally aims to measure the body's iron stores. Stainable iron in bone marrow is the best measure of iron reserves, but it is too invasive for broad use and is mainly limited to clinical settings. Ferritin and soluble transferrin receptor (sTfR) are the most widely used biochemical indicators in surveys that measure iron status. These biomarkers capture different facets of iron biology, and indices using the ratio of sTfR to ferritin have also been used [2, 21, 22]. Threshold values for identifying iron deficiency using either the individual biomarkers or their combination depend on the assays used, and the influence of inflammation and infection on the biomarkers should be considered whether they are used individually or in the construction of indices (Boxes 3.3 and 3.4). Regulation of iron absorption may also be informative and can be assessed by measuring hepcidin. Strengths and limitations of these commonly used indicators, and the laboratory methods for measuring them, are discussed in greater detail below (Table 3.2). There are several other circulating iron biomarkers, notably serum iron and its carrier protein transferrin, which can be informative, but variability in their levels across the day limits their use in surveillance of iron status. A comprehensive discussion of these and other biochemical indicators of iron status not considered in detail here can be found in Lynch et al. (2018) [2].
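One widely cited sTfR-to-ferritin index is the body iron stores estimate of Cook et al.; a sketch follows. Its coefficients are tied to the assays used in the original derivation, so, per the caution above, they should not be applied to other assays without conversion.

```python
import math

# Sketch of the body iron stores index of Cook et al., one of the
# sTfR-to-ferritin ratio indices mentioned above [21, 22]. Coefficients are
# tied to the assays used in the original derivation; as cautioned in the
# text, they should not be applied to other assays without conversion.

def body_iron_mg_kg(stfr_mg_l: float, ferritin_ug_l: float) -> float:
    """Body iron in mg/kg; negative values indicate a tissue iron deficit."""
    ratio = (stfr_mg_l * 1000.0) / ferritin_ug_l   # sTfR (ug/L) over ferritin (ug/L)
    return -(math.log10(ratio) - 2.8229) / 0.1207

print(f"{body_iron_mg_kg(stfr_mg_l=8.5, ferritin_ug_l=10.0):.1f} mg/kg")  # ~ -0.9
```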

Table 3.2 Factors influencing laboratory-based assessment of iron status at different stages of testing

Blood Specimen Collection

The type of blood specimen collected is determined by both the site of the collection (e.g., from the arm, the finger, or the heel) and the vessel into which it is collected. Venipuncture blood samples are the biologic specimen of choice to assess iron status, and venous collection is often required to obtain sufficient blood to test the analytes. Even the most restrictive safety guidelines allow maximum volumes of approximately 1 mL of blood per kg of body weight to be drawn for research [23, 24], and collecting 5 mL of whole blood yields approximately 2 mL of serum or plasma, which is enough for multiple laboratory tests. Although in most cases certified phlebotomists can perform venous draws relatively easily and safely, well-trained and experienced phlebotomists are needed for infants and young children because their veins are not as well developed in the antecubital areas.

An alternative blood source is capillary blood sampling, which uses a lancet to create a small incision in either the finger or, for infants less than 12 months of age, the heel, with blood collected into capillary tubes. Capillary blood collection has its own challenges, including being more painful than venous collection and requiring careful training. Phlebotomists are less likely to be familiar with capillary blood collection, and poor technique can significantly influence sample quality. Improperly collected capillary blood is more likely than venous blood to be compromised by hemolysis or mixing with interstitial fluid during the collection process. The volume of blood obtained is also significantly smaller than for venous blood: capillary whole blood volume collected is generally 0.25–0.5 mL, and the resulting serum or plasma volumes after separating blood components are approximately 0.125–0.25 mL.
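The volume constraints described above amount to simple arithmetic, sketched below using the ~1 mL/kg ceiling and the approximate serum/plasma yield fractions implied by the figures in the text.

```python
# Sketch of the volume arithmetic above: the most restrictive safe research
# draw (~1 mL per kg body weight [23, 24]) and approximate serum/plasma yield
# fractions implied by the figures in the text (~40-50% of whole blood).

def max_draw_ml(weight_kg: float) -> float:
    return 1.0 * weight_kg  # most restrictive guideline: ~1 mL/kg

def serum_yield_ml(whole_blood_ml: float, fraction: float) -> float:
    return whole_blood_ml * fraction

print(max_draw_ml(10.0))          # 10.0 mL ceiling for a 10 kg child
print(serum_yield_ml(5.0, 0.4))   # venous: 5 mL whole blood -> ~2.0 mL serum
print(serum_yield_ml(0.5, 0.5))   # capillary: 0.5 mL -> ~0.25 mL serum
```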

Phlebotomy supplies are relatively inexpensive and widely available. Venous blood can be collected into tubes with or without additives that speed clotting, prevent clotting, or aid in separation of the blood components. The type of blood collection tube required is determined by the specific laboratory assay protocols to be used. Often, either serum or plasma is acceptable, although serum is generally the sample of choice for biochemical analyses, as it is devoid of anticoagulants, cellular material, and most clotting proteins which have potential to interfere with assays. EDTA, a common anticoagulant used for tests that require whole blood (i.e., hematology) and in plasma preparation, has been found to interfere with some sTfR assays [25].

Blood Specimen Processing and Storage

Blood specimen processing and storage require equipment and can pose logistical challenges for field collection. After collection, the blood specimens must be centrifuged to separate the components of the blood, and the serum or plasma fractions must be removed from the collection tubes into storage tubes using a pipette. The fractionation must be done within a specific window of time dictated by the type of collection tube used and the laboratory protocols (ideally within hours of collection).

A power source is needed for processing samples into blood components, and an unbroken cold chain is required from the point of collection through transportation to the lab. Before the blood fractions have been separated, refrigeration at 2–10 °C is required to prevent hemolysis. Once the samples have been processed, they are stored frozen (generally −20 °C) while still in the field. Electronic temperature monitoring devices, electronic data loggers, and digital thermometers can be used to monitor and record temperatures. Once samples reach the central laboratory, they are stored frozen (ideally −70 °C or colder) until analysis or shipment. Freeze-thaw cycles should be kept to a minimum. While portable power sources for operating centrifuges and portable means of refrigeration and freezing are available, these are heavy and require regular recharging and thus present difficulties for blood collection in remote field locations.

Where the difficulties of processing whole blood and storing serum or plasma are barriers to data collection, dried blood spots (DBS) may be a viable alternative [26]. DBS are prepared from a finger or heel prick by collecting free-flowing blood drops onto a special filter paper card. The samples do not require processing in the field, and while there may still be a need for a cold chain, storage temperature requirements are more permissive. Despite the ease of collection, the use of DBS in biochemical assays has its own limitations, and DBS have only rarely been used to measure sTfR. DBS ferritin is not interpretable because the sample contains a mixture of ferritin from serum and from the red blood cells. It has not yet been determined whether hepcidin measurement in DBS offers a reliable alternative.

Biomarkers of Iron Status

Ferritin. Ferritin is an iron storage protein whose circulating concentration is used to assess iron stores; it is one of the most widely used iron biomarkers [4]. However, ferritin is an acute-phase protein that increases with inflammation, so it must be measured in conjunction with inflammation biomarkers. Ferritin is present in relatively low concentrations in serum, and assays used in identifying iron deficiency must have adequate sensitivity to accurately quantify the physiologic range observed in individuals with low body iron stores. Because concentrations are low, serum or plasma may be assayed without dilution, which increases the volume of sample required for testing. The relatively large sample volume and the sensitivity of serum ferritin assays to hemolysis can be obstacles to the use of capillary collected blood for ferritin measurement.

Ferritin is measured using a variety of immunoassay methods, including manual ELISAs that require relatively basic laboratory equipment (Fig. 3.2) and more expensive clinical autoanalyzer methods. In micronutrient surveys, a few notable methods have been used more than others. Many surveys have relied upon an assay described by Erhardt et al., used for testing in the VitMin Laboratory but not currently in use elsewhere [27]. A commonly used manual ELISA for ferritin (Ramco Spectro Ferritin, Webster, TX) is no longer available. Other notable methods for ferritin measurement rely upon clinical laboratory equipment, including the ferritin immunoturbidimetry assay for the Roche Cobas 6000 system used by the US CDC [28]. The WHO considers immunoassay methods acceptable if the assays are calibrated against the WHO international reference material [29]. Application of common threshold values for determining iron deficiency should be approached with caution, and assay-specific reference ranges or harmonization may be required [30] (see Standard Reference Materials and Improvements in Assay Harmonization below).

Fig. 3.2

Major equipment required for manual ELISA: a microtiter plate reader for quantifying colorimetric signal intensity (left) and a microtiter plate washer (right)
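In a manual ELISA of this kind, concentrations are read off a calibration curve fitted to the kit standards, commonly a four-parameter logistic; the sketch below illustrates the fit-and-interpolate step with invented standard values.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of manual ELISA quantification: fit a four-parameter logistic (4PL)
# to the kit standards, then interpolate sample optical densities (OD).
# Standard concentrations and ODs are invented for illustration.

def four_pl(x, a, b, c, d):
    """4PL curve: a = response at zero dose, d = response at infinite dose,
    c = inflection point, b = slope factor."""
    return d + (a - d) / (1.0 + (x / c) ** b)

std_conc = np.array([5.0, 20.0, 50.0, 100.0, 200.0, 400.0])  # ferritin, ug/L
std_od = np.array([0.12, 0.35, 0.70, 1.10, 1.55, 1.90])

(a, b, c, d), _ = curve_fit(four_pl, std_conc, std_od,
                            p0=[0.05, 1.0, 100.0, 2.2], maxfev=10000)

def od_to_conc(od: float) -> float:
    """Invert the fitted 4PL to recover a sample concentration from its OD."""
    return c * ((a - d) / (od - d) - 1.0) ** (1.0 / b)

print(f"{od_to_conc(0.50):.0f} ug/L")
```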

Soluble transferrin receptor. Soluble transferrin receptor (sTfR) circulates in the bloodstream at concentrations that vary depending on erythropoietic activity, which can be reflective of the body's demand for iron [6]. Increased sTfR indicates that body iron stores are inadequate to meet demand and has been shown to correlate with stainable iron in bone marrow [31], the site of primary body iron demand for utilization in erythroid maturation. An advantage of sTfR is that its concentration may not be directly affected by inflammation, but sTfR has still been found to be weakly associated with inflammation biomarkers and is also impacted by other factors that stimulate erythropoiesis [22, 32].

As with ferritin, measurement of sTfR is done using immunoassays, either on clinical analyzers or by manual ELISA. Also like ferritin, sTfR has commonly been measured in the VitMin Lab using the Erhardt et al. method [27]. Manual ELISAs have been supplied by a few manufacturers, and these have used different forms of native sTfR in kit standards, causing some significant discrepancies in calibration. For example, the material supplied for calibration in the Ramco sTfR ELISA kit (now discontinued) used sTfR of placental origin, which reacts differently in ELISA than the sTfR calibration material derived from serum that was used in the R&D Systems ELISA kit [33]. Therefore, as described in Box 3.4, the assay used needs to be considered carefully before applying sTfR cutoffs, and a conversion factor may need to be applied to account for the use of different methods [34].
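As an illustration, the sketch below re-expresses a result from one assay on another assay's scale using a linear conversion; the coefficients are placeholders rather than published values, and real conversion equations come from method-comparison studies such as [34].

```python
# Sketch of converting sTfR results between assays before applying a cutoff
# established with a different method. The slope and intercept are
# placeholders, not published values; real conversion equations come from
# method-comparison studies such as [34].

SLOPE, INTERCEPT = 1.4, 0.4  # hypothetical: assay_B = 1.4 * assay_A + 0.4

def convert_stfr(assay_a_mg_l: float) -> float:
    """Re-express an assay A result on assay B's scale."""
    return SLOPE * assay_a_mg_l + INTERCEPT

print(f"{convert_stfr(5.0):.1f} mg/L on assay B's scale")  # ~7.4
```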

A WHO reference material is available for sTfR, but has poor commutability, meaning it reacts differently depending on the assay used. The issue of commutability of reference materials, discussed in more detail below, is of concern generally for biochemical tests, but sTfR assays seem to be particularly susceptible to these problems, and this has led to poor comparability between studies that have measured sTfR.

Hepcidin. Hepcidin is a hormone that plays an important role in regulating iron homeostasis; its expression is induced by iron and by inflammatory cytokines in response to infection, and it is reduced in response to erythropoiesis, anemia, and hypoxia. Hepcidin shows promise as an iron biomarker because it regulates the absorption of iron and thereby the body's iron stores [2]. Increasing hepcidin levels reduce absorption of iron, indicating either adequate iron stores or a response to inflammation [35]. While hepcidin is raised in an inflammatory state, it is not adjusted for inflammation because its intended use is to better understand iron absorption and mobilization. Hepcidin can be measured by time-of-flight mass spectrometry, but immunoassay, either on a clinical analyzer or with a manual ELISA kit, is a more readily available and affordable method. To date, a major limitation is the lack of a standardized assay or international reference materials. However, significant progress has recently been made toward harmonizing hepcidin assays [36, 37].

Box 3.3 Assessment of Other Biomarkers to Aid in the Interpretation of Iron Biomarkers from a Laboratory Perspective

Biomarkers of inflammation are measured in conjunction with iron status biomarkers in populations and settings with high levels of inflammation or infections to support the interpretation of ferritin and sTfR. The most common inflammatory biomarkers measured are C-reactive protein (CRP) and α1-acid glycoprotein (AGP) to capture acute and chronic inflammation, respectively. Both CRP and AGP are measured using immunoassay, either with manual ELISA or clinical analyzer assays, and comparability is generally good across assays. International reference materials are available for both analytes, and the blood sample volumes required are relatively small. CRP assays may be described as “high sensitivity” or hsCRP, but in the laboratory, this distinction is somewhat arbitrary and reflects the interest in chronic, slight elevation of CRP as an indicator of cardiovascular disease risk.

In a clinical context, a higher ferritin cutoff can be used for individuals with inflammation [4]. At the population level, those with inflammation can be excluded or an arithmetic or regression adjustment applied to ferritin and sTfR [4, 32]. Use of the regression correction approach has the advantage of adjusting the iron biomarkers across the full physiological range of inflammation [38]. From a method perspective, this means accurate and precise CRP and AGP values across this range are needed, including at low concentrations, which has been an issue for some assays [39].
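A sketch of the regression-correction idea follows; the slopes and reference values are placeholders, since in practice they are estimated from the survey data themselves [32, 38].

```python
import math

# Sketch of regression correction of ferritin for inflammation, in the
# spirit of the approach referenced above [32, 38]: ln-ferritin is adjusted
# by the ln-CRP and ln-AGP regression slopes relative to reference values.
# The slopes and reference values here are placeholders; in practice they
# are estimated from the survey data.

B_CRP, B_AGP = 0.16, 0.49      # placeholder regression slopes
CRP_REF, AGP_REF = 0.10, 0.59  # placeholder reference values (mg/L, g/L)

def adjust_ferritin(ferritin_ug_l: float, crp_mg_l: float, agp_g_l: float) -> float:
    """Adjust ferritin only when an inflammation marker exceeds its reference."""
    ln_f = math.log(ferritin_ug_l)
    if crp_mg_l > CRP_REF:
        ln_f -= B_CRP * (math.log(crp_mg_l) - math.log(CRP_REF))
    if agp_g_l > AGP_REF:
        ln_f -= B_AGP * (math.log(agp_g_l) - math.log(AGP_REF))
    return math.exp(ln_f)

print(f"{adjust_ferritin(30.0, crp_mg_l=8.0, agp_g_l=1.2):.1f} ug/L")  # ~10.5
```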

Adjustments to sTfR for the presence of malaria have also been proposed, using microscopy and/or rapid diagnostic tests to diagnose malaria status [32, 40]. The added value of measuring malaria for adjustment purposes is likely minimal, given that, after adjusting for AGP, the influence of malaria on prevalence estimates has been insignificant.

Additional information on adjustments to iron biomarkers can be found in Chap. 5.

Standard Reference Materials

Reference materials are prepared with the support of the WHO and other organizing bodies for use in the validation and standardization of new assays, for assessing inter-laboratory performance, and for periodic recalibration of assays, which is increasingly recognized as important if the reagents or equipment for a commercial assay have changed since it was initially calibrated. Reference materials are typically assigned a value by measurement using a method recognized as the gold standard for the specific analyte. In the absence of a clear gold standard, a value is assigned based on a consensus process in which a material is measured repeatedly by a number of methods in a number of laboratories (see Thorpe et al. (1997) for an example of this process [41]). There can be considerable variation in the measured values used to assign a concentration to the reference material.

The National Institute for Biological Standards and Control (NIBSC) has issued WHO International Standards for Hb (98/708) and ferritin (94/572) and a WHO reference reagent for sTfR (07/202) to assist laboratories in validating analytical measurements [42]. Reference materials that allow harmonization of hepcidin assays have also been developed [36, 37, 43]. The supply and lifespan of reference materials are finite, and they are periodically replaced with new materials that have different concentration values and are sometimes made by different means. These production source differences can affect the potency of the reference material in immunoassays. For example, native ferritin from liver, spleen, and serum may react differently in immunoassays, and marked differences have been noted between sTfR from serum and placental sources [33]. The current ferritin reference material, 94/572, a recombinant protein produced in E. coli, is over two decades old and now in short supply, and the quantity that can be purchased is restricted by the distributor (NIBSC).

Box 3.4 Cutoffs for Iron Status from a Laboratory Perspective

Evidence-based cutoffs for both deficiency and excess have been established for iron status and are described in Chap. 4. The assays used should be reliable and valid across the full physiological range, but assay optimization should prioritize precision and accuracy in the concentration range closest to the cutoff value.

Before applying a cutoff value to categorize sample results as indicating deficiency, sufficiency, or excess, it is essential to consider the calibration of the assay being used for measuring the biomarker concentrations in the samples relative to the calibration of the assay used to establish the cutoff values. In some cases, the supplier of assay materials will provide information about expected ranges for their particular assay. This information should be included, along with specifications of the assay used, any time assay results are reported.

Laboratory Capacity Strengthening

The costs of establishing and maintaining a well-equipped and staffed laboratory to perform the range of biochemical analyses required to measure biomarkers for assessing iron status are significant limitations to the measurement of micronutrient biomarkers in LMICs. Surveys are typically carried out on an intermittent basis, making it especially difficult to meet the criteria used by global organizations for selecting an in-country lab to perform the analyses. Even in settings where clinical iron biochemical assessment is performed, this does not often translate into the ability to store, process, analyze, and capture the data results at the scale needed for a national survey. The absence of national laboratories with the demonstrated ability to analyze iron biomarkers means it is often necessary to ship specimens out of the country. Strengthening of laboratories is clearly needed and requires increased investments in human resources and training, building the necessary infrastructure, and the establishment of and participation in national and international accreditation programs [16]. Instituting regional micronutrient resource laboratories is a potential interim solution, but in the longer term, simpler iron assessment methods that meet national and international standards are necessary.

Future Directions

Variations in Hb Concentrations and the Impact on Anemia Estimates in Field Settings

The notion of differences in Hb concentrations between venous and capillary blood is not new, but further research is needed to identify the cause(s) of these differences in the field context. It is unclear to what extent drop-to-drop variability impacts Hb concentrations, and to what extent recruitment of highly skilled field staff, rigorous training, and field monitoring can overcome the constraints and challenges encountered in a field environment. A potential solution is to collect capillary blood in microcontainers containing an anticoagulant, most often EDTA, but research is needed on whether this overcomes the variability issues.

Innovations in Blood Processing

Methods that separate blood, either from capillary or venous collection, in a device that does not require centrifugation and that can stabilize field-collected samples for transport to a laboratory would be an important step toward large-scale iron assessment. Several devices have recently been offered for sale or described in experimental evaluations. These devices separate plasma either by a lateral flow across a paper substrate or by applying blood to a filter that retains the cellular components while trapping the plasma fraction in an absorbent material [44, 45]. Some of the filter devices have the advantage of allowing the use of the red blood cells for other analyses, for example, measurement of RBC folate. Barriers to wide use of these devices include limited availability, high cost, and a lack of complete vetting through field tests.

Point of Care

Noninvasive POC technologies to quantify Hb concentrations and/or diagnose anemia, such as photography of the conjunctiva and pulse oximeters, as well as POC automated analyzers that measure a CBC, hold promise, but more research is needed before they can be put into practice [46, 47, 48]. The assessment of iron status at the point of specimen collection is currently in the proof-of-concept phase. These methods are especially relevant for iron because of their potential use in determining when the administration of an iron intervention would be most effective and in targeting iron interventions to those with diagnosed iron deficiency rather than using a blanket approach, which has safety implications.

Multiplex Assays

Iron status is best assessed by the use of more than one iron biomarker in conjunction with markers of inflammation. Using conventional laboratory testing approaches requires that each specimen be tested multiple times. This makes multiplex assays, in which multiple analytes are measured simultaneously, particularly useful. Erhardt et al. described an initial step toward multiplexing in which assays for ferritin, sTfR, CRP, and AGP along with retinol-binding protein for vitamin A status assessment are combined into a system that maximizes efficiency [27] and requires a small quantity of blood (25 μL for duplicate measurement [49]). That method is in use only in the VitMin Lab (Willstaett, Germany). Since the publication of that combined approach, multiplex assays which measure all the analytes in a single well have been developed. The multiplex platforms still rely upon the principles of immunoassays, with an antibody-antigen reaction remaining at the heart of the technique, but the reaction occurs on either coated microscopic beads or planar arrays that allow the assays to detect discrete signals for multiple analytes tested in a single assay. These techniques allow measurement of multiple analytes from very small volumes of blood, such as those obtained from capillary collection, and reduce the labor costs, time, and additional laboratory consumables required [50].

Improvements in Assay Harmonization

Subtle differences in assay reactivity with protein forms in reference materials are not necessarily mirrored in endogenous forms of ferritin, sTfR, and hepcidin; thus, traceability to the WHO reference material does not guarantee comparable results between assays [51]. The past decade has seen increasing recognition that reference materials should be commutable, meaning that two assays calibrated using the same reference material should provide agreement between methods in their measurement of clinical samples [52]. The ideal solution would be improvement of reference materials to ensure commutability (i.e., equal reactivity across assays), but failing that, harmonization (i.e., mathematical adjustment of the concentration values to achieve agreement between methods) using donor panels can be used as an interim measure [36, 53]. Neither approach is currently in routine use, making application of cutoff values for identifying deficiency a challenge.
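A donor-panel recalibration can be expressed as a regression that maps one method onto another, as in the sketch below (values invented; ordinary least squares is shown, though Deming regression is often preferred when both methods carry measurement error).

```python
import numpy as np

# Sketch of harmonization via a donor panel: paired measurements of the same
# donor samples on two methods define a recalibration equation that maps
# method B results onto method A's scale. Values are invented; ordinary least
# squares is shown, though Deming regression is often preferred when both
# methods carry measurement error.

method_a = np.array([4.1, 6.0, 8.2, 11.5, 15.3])   # sTfR, mg/L (comparator)
method_b = np.array([5.0, 7.6, 10.5, 14.9, 19.8])  # same donors, method B

slope, intercept = np.polyfit(method_b, method_a, deg=1)

def harmonize(b_value_mg_l: float) -> float:
    """Re-express a method B result on method A's scale."""
    return slope * b_value_mg_l + intercept

print(f"{harmonize(12.0):.1f} mg/L")  # ~9.4 on method A's scale
```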

Conclusion

Iron deficiency, which is the leading nutritional cause of anemia, affects billions of people globally. Despite some pre-analytical and analytical challenges, the assessment of anemia is relatively simple to conduct. Thus, both clinical diagnosis and surveillance of anemia are more common than that of iron status in LMICs. However, anemia is caused by multiple factors and is neither a specific nor sensitive measure of iron deficiency. The lack of accurate and widely available data on iron status is a major impediment to designing policies and programs to address the underlying causes of anemia. This data gap should continue to be addressed using laboratory methods that are already available while also exploring the development of and improvements in noninvasive methods, point-of-care technology, multiplex assays, and assay harmonization.