Medical diagnostic laboratories use various methodologies to measure sodium (Na) in the blood. The purpose of this work is to identify the advantages and disadvantages of ion chromatographic, spectrophotometric, titrimetric, and gravimetric methods, and to determine the extent to which each of these methods is used in current clinical diagnostic practice.

Ion chromatographic methods of determining Na in blood serum appeared in the mid-1970s. In these methods, the serum is filtered and diluted 20–100-fold; sometimes it is also pre-oxidized. A specified volume of the diluted serum (10–100 μl) is injected into a flow of eluent. A solution of phenylenediamine dihydrochloride or another compound in hydrochloric or nitric acid can serve as the eluent. The eluent is then run through two sequentially connected ion exchange columns. In the first (separation) column, filled with cation exchange resin (H+-form), the following reaction takes place:

$$ \mathrm{Na}^{+} + \mathrm{H}_2\mathrm{O} \to \mathrm{Na}^{+}\mathrm{OH}^{-} + \mathrm{H}^{+}, $$

which causes the separation of Na+ ions from other ions present in the serum. In the second “suppression column” loaded with anion exchange resin (OH-form), one of the following reactions takes place:

$$ \begin{aligned} \mathrm{Na}^{+}\mathrm{OH}^{-} + \mathrm{H}^{+}\mathrm{Cl}^{-} &\to \mathrm{Na}^{+}\mathrm{Cl}^{-} + \mathrm{H}_2\mathrm{O};\\ \mathrm{Na}^{+}\mathrm{OH}^{-} + \mathrm{H}^{+}\mathrm{NO}_3^{-} &\to \mathrm{Na}^{+}\mathrm{NO}_3^{-} + \mathrm{H}_2\mathrm{O};\\ 2\,\mathrm{Na}^{+}\mathrm{OH}^{-} + \mathrm{PDA}\cdot 2\left(\mathrm{H}^{+}\mathrm{Cl}^{-}\right) &\to 2\,\mathrm{Na}^{+}\mathrm{Cl}^{-} + \mathrm{PDA} + 2\,\mathrm{H}_2\mathrm{O}, \end{aligned} $$

where PDA·2(H+Cl−) is phenylenediamine dihydrochloride, and PDA is phenylenediamine.

Every reaction is accompanied by a decrease in the electrical conductivity of the eluent flow. In some cases, only the separation column is used. After the eluent flow passes through the columns, its electrical conductivity is measured with a conductivity meter. The detector output signal, recorded as a function of time, appears as a sequence of peaks. The time corresponding to the peak maximum (retention time) identifies the separated ion, and the area under the peak and its height quantify the amount of that ion in the blood serum. Depending on the conditions of analysis (the type of eluent, its concentration and flow rate, the beads used to fill the column, etc.), the Na+ ion retention time is 2–12 min.
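As a minimal illustration of how a chromatogram is converted into a concentration (the symbols $S_x$, $S_{\mathrm{st}}$, $C_{\mathrm{st}}$, and $n$ are introduced here for clarity and are not taken from [1–3]), calibration against a standard solution with a linear detector response gives

$$ C_{\mathrm{Na}} = n\,C_{\mathrm{st}}\,\frac{S_x}{S_{\mathrm{st}}}, $$

where $S_x$ and $S_{\mathrm{st}}$ are the areas of the Na+ peak for the diluted sample and for a standard of concentration $C_{\mathrm{st}}$, and $n$ is the dilution factor of the serum.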

The following are examples of the use of ion chromatography to determine Na concentration in blood serum.

In the first method, to determine Na at concentrations up to 150 mM, the serum was diluted 40-fold. Then, 100 μl of the diluted serum was injected into an ion chromatograph (with a conductivity detector) equipped with a 9×250 mm Chromex DCS-X2-55 separation column and a 6×250 mm Chromex DA-X10-55 suppression column. Phenylenediamine dihydrochloride solution (1 mM) was the eluent. Eluent flow rate was 115 ml/h. Na+, K+, and NH4+ ions registered as a single peak with a retention time of 4 min. Retention time for Mg2+ ions was 7 min, and for Ca2+ ions – 15 min [1].

In the second method, blood serum diluted 40-fold was introduced into the ion chromatograph equipped with a 6×500 mm Chromex DCS-X2-55 separation column and a 6×500 mm Chromex DA-X10-55 suppression column. Hydrochloric acid solution (5 mM) served as the eluent. Eluent flow rate was 138 ml/h. Retention time for Na+ ions was 12 min, for NH4+ ions – 17 min, and for K+ ions – 19.5 min [1].

In the third method, blood serum was filtered and diluted 20-fold. After this, 10 μl of the diluted serum was injected into an ion chromatograph equipped with a separation column loaded with cation exchange resin. A solution of nitric acid (6.3 mM) served as the eluent. Eluent flow rate was 1 ml/min. Analysis was carried out at room temperature. Retention time for Na+ ions was 2 min, and for K+ ions – 4 min. The standard deviation of the relative measurement error in the determination of Na concentration did not exceed 2.13% (for 10 repeat measurements of the same serum sample). The results of measuring Na in 110 serum samples by this method and by the potentiometric method were compared (correlation coefficient about 0.52) [2].

In the 4th and 5th methods of determining Na, the serum was pre-oxidized, filtered, and diluted 100-fold. In the 4th method, the diluted serum was introduced into an ion chromatograph equipped with a 9×250 mm separation column combined with a suppression column, and in the 5th method, into a chromatograph equipped only with a 3×100 mm separation column. A solution of nitric acid (5 mM) was the eluent in both methods. Retention time of Na+ ions was 7.2 min in the 4th method and 3.0 min in the 5th. Potassium present in the serum did not affect the measurement results, since the retention time of K+ ions was 9.6 min in the 4th method and 6.25 min in the 5th. Comparison of the Na measurements obtained with these methods showed that they yielded similar results, 133.9 and 139.1 mM [3].
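For reference, these two mean values differ by 5.2 mM, i.e., by about 3.8% of their average.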

From this description, it follows that ion chromatography methods of determining Na concentration in blood serum can provide high measurement selectivity when the conditions of analysis are carefully selected; however, the throughput in this case is 3–15 analyses per hour, which is significantly lower than that of flame photometric and potentiometric methods. In addition, the instrumentation is expensive, and its reliable operation requires daily maintenance. These circumstances prevent the widespread use of ion chromatography in clinical diagnostic laboratories that carry out bulk analyses, and the listed methods are mainly used for biomedical studies in large research centers.

Spectrophotometric methods of determining Na in plasma and blood serum have been known since the 1950s. The specified methods are based on the principle of measuring the intensity of monochromatic radiation, which decreases after passing through a solution or coating containing the chemical compound that interacts with Na:

$$ {C}_1 = f\left({C}_{\mathrm{Na}}\right), $$

where $C_{\mathrm{Na}}$ is the concentration of Na in the solution, and $C_1$ is the concentration of the formed compound.

In these measurements, the intensity of the monochromatic beam $J_{\lambda\mathrm{s}}$ passing through the solution depends on the concentration of the specified compound $C_1$ and obeys the Beer–Bouguer–Lambert law:

$$ J_{\lambda\mathrm{s}} = J_{0\lambda\mathrm{s}}\exp\left(-D_{\mathrm{s}}\right) = J_{0\lambda\mathrm{s}}\exp\left[-l\left(\varepsilon_{\lambda 1}C_1 + \sum_j \varepsilon_{\lambda j}C_j\right)\right], $$

where $J_{0\lambda\mathrm{s}}$ is the intensity of the monochromatic beam prior to passing through the solution; $D_{\mathrm{s}}$ is the optical density (absorption) of the solution; $\varepsilon_{\lambda 1}$ and $\varepsilon_{\lambda j}$ are the extinction (absorption) coefficients of the monochromatic radiation for the compound formed upon interaction with Na and for the other $j$-components of the solution, respectively; $C_j$ is the concentration of the $j$-th component; and $l$ is the optical path length of the monochromatic beam in the solution.
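If the background absorption due to the other serum components is determined separately, for example on a reagent blank (the quantity $D_{\mathrm{s}0}$ below is introduced here as an assumption and is not taken from the cited works), the working relation reduces to

$$ C_1 = \frac{\ln\left(J_{0\lambda\mathrm{s}}/J_{\lambda\mathrm{s}}\right) - D_{\mathrm{s}0}}{l\,\varepsilon_{\lambda 1}}, \qquad D_{\mathrm{s}0} = l\sum_j \varepsilon_{\lambda j}C_j, $$

after which $C_{\mathrm{Na}}$ is recovered from the calibration dependence $C_1 = f(C_{\mathrm{Na}})$.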

In measurements in a coating that is in contact with the analyzed solution, the dependence of the intensity of the transmitted monochromatic beam $J_{\lambda\mathrm{c}}$ on the concentration $C_{\mathrm{Na}}$ can be approximated by

$$ J_{\lambda\mathrm{c}} = J_{0\lambda\mathrm{c}}\exp\left(-D_{\mathrm{c}}\right) = J_{0\lambda\mathrm{c}}\exp\left\{-\left[a + b\ln\left(C_{\mathrm{Na}} + \sum_j k_{j:\mathrm{Na}}C_j\right)\right]\right\}, \tag{1} $$

where $J_{0\lambda\mathrm{c}}$ is the intensity of the monochromatic radiation prior to passing through the coating; $D_{\mathrm{c}}$ is the absorption of the coating; $a$ and $b$ are empirical coefficients; and $k_{j:\mathrm{Na}}$ is the selectivity coefficient of the $j$-th component of the analyzed solution relative to Na.
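Equation (1) can be inverted to estimate the sodium concentration. Assuming, for illustration only, that the interfering concentrations $C_j$ are known or negligible,

$$ C_{\mathrm{Na}} = \exp\left(\frac{D_{\mathrm{c}} - a}{b}\right) - \sum_j k_{j:\mathrm{Na}}C_j, \qquad D_{\mathrm{c}} = \ln\left(J_{0\lambda\mathrm{c}}/J_{\lambda\mathrm{c}}\right), $$

with the empirical coefficients $a$ and $b$ determined by calibration against standard solutions.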

Spectrophotometric methods of determining Na in blood can be divided into enzymatic and non-enzymatic. Non-enzymatic methods that appeared in the 1950s were based on reactions of formation of sodium-zinc-uranyl acetate or sodium-magnesium-uranyl acetate, and on reactions of formation of sodium nitroso barbiturate. Error in determining Na concentration using these methods is no more than 4% [4, 5].

Non-enzymatic methods known since the end of the 1980s are based on the reactions of Na with macrocyclic compounds. In 1988, a method was proposed in which the concentration of Na in plasma and blood serum was determined using a macrocyclic compound with a cavity roughly the size of a Na+ ion, which ensures high measurement selectivity. To 4 μl of plasma or serum were added 390 μl of a reagent commercially known as ChromoLyte, which included a buffer solution (pH 7.5), a stabilizer, a surfactant, and the macrocyclic compound. The mixture was incubated for 4 min at 37°C, after which the change in its absorption was measured using radiation with a wavelength of 500 nm. The change in absorption is linearly proportional to the concentration of Na in the range of 80–170 mM. The standard deviation of the relative error was 2.1, 1.6, and 1.5% for Na concentrations of 116, 136, and 152 mM, respectively. The presence of Ca (0.9 mM), Mg (2.6 mM), Fe (4.3), pyruvate (2.3 mM), lactate (2.6 mM), salicylate (1.9 mM), acetaminophen (0.3 mM), glucose (56 mM), urea (8.3 mM), creatinine (3.1 mM), ascorbic (0.3 mM) and uric (1.2 mM) acids, ethyl alcohol (33 mM), bilirubin, and lipids did not affect the results. The presence of K (up to 10 mM) caused a less than 1 mM decrease in the determined Na (140 mM). The measurement results for 108 serum samples obtained by the proposed and potentiometric methods were compared (correlation coefficient – 0.9881) [6].

In 1991, two methods of determining Na in blood plasma were developed using different optodes (coating – analyzed solution – coating). 250 μl of plasma were diluted 20-fold with a solution of Mg(CH3COO)2 (pH 4.9) and introduced into a flow-through cuvette of a spectrophotometer. The cuvette contained the measuring and reference optodes. In the first method, the coating on the measuring optode consisted of sodium tetrakis(3,5-bis(trifluoromethyl)phenyl)borate, Na-ionophore V (4-octadecanoyloxymethyl-N,N,N’,N’-tetracyclohexyl-1,2-phenylenedioxydiacetamide), polyvinylchloride, bis(1-butylpentyl)adipate, and chromoionophore II (9-dimethylamine-5-(4-(16-butyl-2,14-dioxo-3,15-dioxaeicosyl)phenylimino)benzo(a)phenoxazine); and in the second, of sodium tetrakis(3,5-bis(trifluoromethyl)phenyl)borate, Na-ionophore V, polyvinylchloride, bis(1-butylpentyl)adipate, and chromoionophore III (9-diethylamine-5-((2-octyldecyl)imino)benzo(a)phenoxazine). The coating on the reference optode consisted of sodium tetrakis(3,5-bis(trifluoromethyl)phenyl)borate, Na-ionophore V, polyvinylchloride, and bis(1-butylpentyl)adipate. The coating thickness on all the optodes was 4 μm. Measurements were carried out using radiation with a wavelength of 650 nm. The spectrophotometer signal $\Delta D_{\mathrm{c}}$ was linearly dependent on the logarithm of the Na concentration in blood plasma (similarly to (1)):

$$ \Delta D_{\mathrm{c}} = D_{1\mathrm{c}} - D_{0\mathrm{c}} = a + b\ln\left(C_{\mathrm{Na}} + \sum_j k_{j:\mathrm{Na}}C_j\right), $$

where $D_{1\mathrm{c}}$ and $D_{0\mathrm{c}}$ are the optical densities (absorptions) of the measuring and the reference optodes, respectively.

The standard deviation of the relative error of Na measurements in the control samples (121 and 135 mM) did not exceed 2%. The selectivity coefficients were estimated as $k_{\mathrm{K:Na}} \approx 0.06$, $k_{\mathrm{Li:Na}} \approx 0.08$, $k_{\mathrm{Ca:Na}} \approx 0.06$, $k_{\mathrm{Mg:Na}} \approx 0.003$ for the first method and $k_{\mathrm{K:Na}} \approx 0.06$, $k_{\mathrm{Li:Na}} \approx 0.08$, $k_{\mathrm{Ca:Na}} \approx 0.04$, $k_{\mathrm{Mg:Na}} \approx 0.001$ for the second. The presence of bilirubin, hemoglobin, and lipids had no effect. Analysis time did not exceed 30 sec. Measurement results for 20 plasma samples obtained using these methods were compared with the results of the flame photometric and potentiometric methods; the correlation coefficients were 0.988 and 0.989, respectively. Upon contact with plasma, the characteristics of the coatings changed; in particular, after 10 h of exposure to plasma, the sensitivity (coefficient $b$) decreased by 11% [7].
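Assuming a typical plasma potassium level of about 4 mM and a sodium level of about 140 mM (values adopted here only for the estimate), the selectivity coefficient $k_{\mathrm{K:Na}} \approx 0.06$ corresponds to an apparent sodium contribution of about $0.06 \times 4 \approx 0.24$ mM, i.e., roughly 0.2% of the measured value, which is small compared with the 2% standard deviation quoted above.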

Enzymatic spectrophotometric methods of determining Na in blood serum appeared in the 1980s–1990s. They are based on the hydrolysis of o-nitrophenyl-β-D-galactopyranoside (NPGP) under the catalytic action of the enzyme β-galactosidase (EC 3.2.1.23):

$$ \mathrm{NPGP} + \mathrm{H}_2\mathrm{O} \xrightarrow{\ \upbeta\text{-galactosidase}\ } o\text{-nitrophenol} + \mathrm{galactose}. \tag{2} $$

Below are a few examples of the enzymatic method.

In 1988, a method was proposed for determining Na in blood serum using a macrocyclic compound (the cryptand Kryptofix 221 – 4,7,13,16,21-pentaoxa-1,10-diazabicyclo[8.8.5]tricosane) and the enzyme β-galactosidase. First, Na+ ions were partially bound by the cryptand, and then the remaining Na+ ions activated the enzyme β-galactosidase, which catalyzed the hydrolysis of o-nitrophenyl-β-D-galactopyranoside (reaction (2)). To implement this method, 10 μl of serum were diluted 5-fold, and then 200 μl of a solution were added containing tris(hydroxymethyl)aminomethane buffer (300 mM; pH 8.7), Kryptofix 221 (4.4 mM), DL-dithiothreitol (4 mM), MgSO4 (7.5 mM), LiCl (16 mM), lithium 1,2-bis(2-aminophenoxy)ethane-N,N,N’,N’-tetraacetate (0.33 mM), albumin (0.67 g/liter), and the enzyme β-galactosidase (750 IU/liter). The resulting mixture was incubated at 37°C for 1.5 min. Then, 40 μl of distilled water were introduced along with 10 μl of o-nitrophenyl-β-D-galactopyranoside (1.5 mM), and the absorption of the solution was measured every 5 sec for 2.5 min using radiation with a wavelength of 420 nm (cuvette path length – 12 mm). The absorption was linearly dependent on the concentration of Na in the serum in the range of 110 to 160 mM. The introduction of the cryptand into the reaction mixture ensured the linearity of this dependence and, together with the introduction of Mg2+ ions, led to an increase in the sensitivity of the measurements. The average result of Na measurements in the control sample (134.1 mM) was 133.3 mM. The average standard deviation of the relative error was 0.83 and 0.89% for Na concentrations of 123.7 and 149.2 mM, respectively. Hemoglobin in the serum (0.5 g/liter), as well as bilirubin (300 μM), albumin (20 g/liter), heparin (30 kIU/liter), lactose (50 mM), NH4Cl (2.5 mM), LiCl (5 mM), KCl (10 mM), MgSO4 (2 mM), CaCl2 (2.5 mM), ZnSO4 (20 μM), CuSO4 (20 μM), FeCl3 (20 μM), and Al(NO3)3 (20 μM) changed the Na concentration measurement result by not more than 1 mM. Triglycerides did not affect the results. The selectivity of the Na measurement relative to K, Li, and NH4 was evaluated as 1:0.067:0.01:0.01. Results of measurements in 100 serum samples using the proposed method and the flame photometric method were compared (correlation coefficient – 0.985) [8].
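If the measured signal for a sample is denoted $D_x$, a two-point calibration with control sera of known concentrations $C'$ and $C''$ (this notation is introduced here for illustration; the cited work reports only the linearity of the response in the 110–160 mM range) gives

$$ C_{\mathrm{Na}} \approx C' + \left(C'' - C'\right)\frac{D_x - D'}{D'' - D'}, $$

where $D'$ and $D''$ are the signals recorded for the two calibration sera under the same conditions.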

In 1993, another method of determining Na in blood serum using the enzyme β-galactosidase was developed, intended for use in an automatic flow-through device with a throughput of up to 50 analyses per hour. The serum flow was diluted 1000-fold with tris(hydroxymethyl)aminomethane buffer (12.1 g/liter; pH 7.2), which included 1,2-bis(2-aminophenoxy)ethane-N,N,N’,N’-tetraacetic acid (125.5 g/liter) and MgSO4·7H2O (1.85 g/liter), and was then mixed with flows of solutions of DL-dithiothreitol (4 mM), o-nitrophenyl-β-D-galactopyranoside (4 mM), and tris(hydroxymethyl)aminomethane buffer. The resulting solution was directed to a flow-through reactor containing immobilized β-galactosidase and heated to 37°C. After the reactor, the flow was mixed with a flow of NaOH (4 M) and introduced into the spectrophotometer. The absorption of the resulting solution was measured at 405 nm. A linear relationship between absorption and the Na concentration in the blood was observed in the range from 17 to 1700 mM. The limit of detection was 12 mM. The volume of serum required for the analysis was no more than 50 μl. The introduction of DL-dithiothreitol and Mg2+ ions into the reaction mixture prevented any decrease in the catalytic activity of the enzyme β-galactosidase. K+, NH4+, and Ca2+ ions (up to 250 mM) in the blood serum and Li+ ions (up to 20 mM) did not affect the measurement of 170 mM Na. The standard deviation of the relative error for the measurement of a Na concentration of 136.3 mM was 3.3%. Results of measurements in 30 serum samples using the developed method were compared with the results of the flame photometric and potentiometric methods (correlation coefficients – 0.9394 and 0.9716, respectively) [9].

Characteristics of the enzymatic method [8] were evaluated by clinical diagnostics specialists.

In 1994, the results of measuring Na in blood serum samples by the method described in [8] were compared with those of the flame photometric method. The comparison of the obtained data revealed an effect of K+ ions on the Na results obtained by the method of [8] that exceeded the declared value by a factor of 3.7–7.8 [10].

In 1994, the standard deviation of the relative error was evaluated for measurements of Na in blood serum using the method of [8], flame photometry, and potentiometry. The results were 2.02, 0.7, and 0.68%, respectively. These data show that flame photometry and potentiometry were approximately three times more accurate [11].

In 1994, the method of [8] was compared with the flame photometric and potentiometric methods of determining Na in blood serum samples also containing proteins (56–102 g/liter), triglycerides (6.2–34 g/liter), hemoglobin (1.19–8.99 g/liter), bilirubin (29–498 μM), and creatinine (0.6–1.7 mM). For the majority of the analyzed samples, the discrepancy between the results obtained by these methods was not medically significant. However, for samples containing more than 15 g/liter of triglycerides, the method of [8] often gave low results, which contradicts the absence of a triglyceride effect declared in [8] [12].

From the above, it follows that spectrophotometric determination of Na requires a relatively small volume of blood plasma or serum (4–250 μl), so these methods can be used for pediatric analysis. Their throughput can reach 120 analyses per hour. At the same time, the measurement error (standard deviation of the relative error of 1.5–3.3%) usually exceeds that of flame photometric and potentiometric techniques. The cost of spectrophotometric equipment is comparable to that of flame photometers, but significantly exceeds the cost of implementing potentiometric methods. In addition, in terms of the amount and cost of consumables (including optodes and reactors with immobilized β-galactosidase), spectrophotometric methods are inferior to the flame photometric and potentiometric techniques involving dilution of plasma, serum, or whole blood, and especially to potentiometric methods without dilution. These circumstances limit the use of spectrophotometric methods in clinical diagnostics, and at present they are used only in laboratories where the majority of blood components are determined spectrophotometrically.

Titrimetric methods of determining blood Na appeared in the 1950s and are based on the application of ion exchange resins, reactions of formation of sodium-zinc-uranyl acetate and sodium-magnesium-uranyl acetate, as well as on the reaction of formation of sodium methoxyphenylacetate [4, 13].

For example, in 1959 a method was developed for determining Na in blood serum at concentrations of 103–175 mM. 2.5 ml of acetone were added to 2 ml of serum. The mixture was centrifuged for 2 min at 2000 rpm (to remove precipitated proteins). 4 ml of methoxyphenylacetic acid were added to 3 ml of the centrifuged liquid. The resulting solution was incubated for about 20 min at –20°C and then centrifuged for 2 min at 2000 rpm at the same temperature. The precipitate was dissolved in hot distilled water, 1–2 drops of phenolphthalein (0.5%) were added, and the solution was titrated with NaOH or KOH to a pink endpoint. Calcium in the serum (up to 20 mM), chloride, and phosphate did not affect the measurement results; Mg and K resulted in a measurement error of less than 2%, and sulfate of around 6%. The analysis took about 1.25 h. The discrepancies between the results of measurements using the developed method [4] and the flame photometric method reached 11.4%.
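In the general form common to titrimetric assays (the stoichiometric and dilution factors of the specific protocol are lumped here into a single assumed coefficient $f$), the titration result is converted to a concentration as

$$ C_{\mathrm{Na}} = \frac{c_{\mathrm{t}}\,V_{\mathrm{t}}\,f}{V_{\mathrm{serum}}}, $$

where $c_{\mathrm{t}}$ and $V_{\mathrm{t}}$ are the concentration and volume of titrant consumed to the endpoint, and $V_{\mathrm{serum}}$ is the volume of serum represented in the titrated aliquot.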

Gravimetric methods of determining Na in blood serum have been known since the 1930s and are based on the use of ion exchange resins with the formation of Na2SO4, or on the formation of a precipitate in the form of sodium-zinc-uranyl acetate or sodium-magnesium-uranyl acetate [4, 14].
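For the ion exchange variant, in which sodium is ultimately weighed as Na2SO4, the concentration follows from the usual gravimetric factor (the symbols $m$ and $V_{\mathrm{serum}}$ are introduced here for illustration):

$$ C_{\mathrm{Na}} = \frac{2m}{M(\mathrm{Na}_2\mathrm{SO}_4)\,V_{\mathrm{serum}}}, \qquad M(\mathrm{Na}_2\mathrm{SO}_4) \approx 142\ \mathrm{g/mol}, $$

where $m$ is the mass of the weighed precipitate and the factor 2 accounts for the two sodium atoms per formula unit.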

Due to the long duration and low accuracy of analyses, at the present time titrimetric and gravimetric methods of determination of Na in the blood are not used in clinical diagnostic laboratories.

Conclusions. The greatest activity in the development of methods for determining Na in whole blood, plasma, and serum occurred between the 1970s and the 1990s. In the 2000s, it decreased significantly, since the main methodological and technological problems had been resolved. The developed flame photometric methods, after improvement, became the expert methods. Potentiometric methods have become dominant for routine analyses. Ion chromatography is used predominantly for biomedical studies. Non-enzymatic and enzymatic spectrophotometric methods, as well as titrimetric and gravimetric methods, are hardly used in modern clinical diagnostics. Any remaining questions regarding the determination of Na concentration in blood plasma, serum, or whole blood concern the organization and operation of clinical diagnostic laboratories rather than the need to improve the measurement methods themselves.