Shock may be defined broadly as a condition in which metabolic energy production is limited by either the supply or the utilization of oxygen, manifested by derangement of the normal oxygen supply–demand balance, accumulation of the byproducts of anaerobic metabolism, and dysfunction of one or more organ systems. In the case of massive hemorrhage, shock is due specifically to impaired oxygen delivery (DO2) secondary to both hypovolemia and anemia. Restoration of tissue perfusion, termed resuscitation, proceeds systematically, is based upon the underlying etiology of shock, terminates upon achievement of clearly defined endpoints, and requires frequent re-evaluation. Resuscitation has been refined substantially over the previous decade, with a resultant improvement in the outcomes of critically ill patients (Brun-Buisson et al. 2004; Martin et al. 2003). Major changes have included improved accuracy of the assessment of intravascular volume status, recognition of the detrimental effects of both allogeneic blood product transfusion and excessive volume expansion, and timely, goal-directed treatment of shock and its complications. This chapter will focus on the diagnosis of shock, the differentiation of hemorrhagic shock, the benefits and limitations of various measurements used to determine the adequacy of resuscitation, general resuscitative strategies, and the complications of resuscitation.

1 Pathophysiology and Etiologies of Shock

Throughout medical history, the notion of shock has been captured using many colorful terms, ranging from “a momentary pause in the act of death” (Warren 1952) to “the rude unhinging of the machinery of life” (Gross 1872). At the cellular level, shock represents inadequate tissue perfusion resulting in compromise of one or more organ systems. Oxygen delivery that is insufficient to meet metabolic demands is believed to be the main defect responsible for shock, although impaired oxygen utilization despite normal or even supranormal DO2 has been observed in septic shock specifically. Although hypotension is an important component of shock, substantial tissue hypoperfusion may exist despite a “normal” (≥60 mm Hg) mean arterial pressure (Wo et al. 1993), so-called occult hypoperfusion, underscoring the importance of multiple measurements of tissue perfusion.

Oxygen delivery is the product of arterial oxygen content and cardiac output. Cardiac output, in turn, is dependent upon intravascular volume, cardiac contractility, and systemic vascular tone. Derangement of these parameters results in three broad categories of shock: hypovolemic, cardiogenic, and vasodilatory, respectively, although less common etiologies of shock are recognized. Hemorrhagic shock is considered traditionally a sub-classification of hypovolemic shock, as the etiology of the hypovolemia is specifically blood loss. However, impaired tissue perfusion in hemorrhagic shock is actually due to both hypovolemia (decreased preload) and anemia (decreased arterial oxygen content). This distinction has obvious therapeutic implications in that both volume and oxygen-carrying capacity must be restored.
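
These relationships may be made concrete with a brief computational sketch. The Python example below uses the standard physiologic constants for hemoglobin-bound oxygen (1.34 mL O2/g) and dissolved oxygen (0.003 mL O2/dL/mm Hg), which are conventional approximations rather than values stated in this chapter; the input values are hypothetical.

```python
# Sketch of oxygen delivery (DO2) as the product of cardiac output and
# arterial oxygen content (CaO2), per the relationships described above.

def arterial_o2_content(hb_g_dl: float, sao2: float, pao2_mm_hg: float) -> float:
    """CaO2 in mL O2/dL: hemoglobin-bound plus dissolved oxygen."""
    return (1.34 * hb_g_dl * sao2) + (0.003 * pao2_mm_hg)

def oxygen_delivery(cardiac_output_l_min: float, cao2_ml_dl: float) -> float:
    """DO2 in mL O2/min; the factor of 10 converts content per dL to per L."""
    return cardiac_output_l_min * cao2_ml_dl * 10

cao2 = arterial_o2_content(hb_g_dl=15.0, sao2=0.98, pao2_mm_hg=95.0)
print(f"CaO2 = {cao2:.1f} mL/dL")                        # ~20 mL/dL
print(f"DO2  = {oxygen_delivery(5.0, cao2):.0f} mL/min")  # ~1000 mL/min

# Hemorrhage degrades both terms: anemia lowers CaO2 while hypovolemia
# lowers cardiac output, compounding the fall in DO2.
cao2_bleeding = arterial_o2_content(hb_g_dl=8.0, sao2=0.98, pao2_mm_hg=95.0)
print(f"DO2 in hemorrhage = {oxygen_delivery(3.0, cao2_bleeding):.0f} mL/min")
```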

Circulating blood volume (in liters) is approximately 7 % of body weight in kilograms (roughly 70 mL/kg): a 70 kg individual thus possesses a blood volume of approximately 4.9 L. A classification schema espoused by the American College of Surgeons’ Advanced Trauma Life Support program recognizes four classes of hemorrhage based upon absolute quantities of blood loss (Table 5.1). Although this classification schema is useful for understanding the serial effects of increasing hemorrhage on physiology, it does not account for the marked inter-patient variability in the degree of hemorrhage necessary to cause shock. A myriad of covariables, including age, comorbid conditions (particularly cardiopulmonary disease), and baseline hemoglobin concentration, influence this quantity. Thus, a more accurate definition of hemorrhagic shock involves any quantity of acute blood loss that results in evidence of end-organ hypoperfusion.
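
A minimal sketch of the weight-based estimate, assuming the 7 % (≈70 mL/kg) approximation given above; the helper names and example inputs are illustrative only.

```python
# Estimate circulating blood volume from body weight and express a given
# hemorrhage as a fraction of that volume.

def estimated_blood_volume_l(weight_kg: float) -> float:
    """Circulating blood volume in liters, ~7 % of body weight."""
    return 0.07 * weight_kg

def percent_blood_loss(loss_l: float, weight_kg: float) -> float:
    """Acute blood loss as a percentage of estimated blood volume."""
    return 100 * loss_l / estimated_blood_volume_l(weight_kg)

print(estimated_blood_volume_l(70))        # 4.9 L, as in the text
print(round(percent_blood_loss(1.0, 70)))  # a 1 L bleed ~= 20 % of volume
```

Note that identical absolute losses produce very different fractional losses across body sizes, one illustration of why absolute classification schemes mask inter-patient variability.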

Table 5.1 Advanced trauma life support classification of hemorrhage and resultant physiologic derangements

2 Markers of Tissue Perfusion and Endpoints of Resuscitation

In order to minimize the complications of overzealous volume expansion, shock must be diagnosed prior to embarking on resuscitation. Although the diagnosis of shock is often based upon an overall impression by an experienced clinician, it is helpful to document the presence of shock in terms of objective measurements of tissue perfusion. Concern for shock is often raised initially in the face of organ system dysfunction. Major organ systems and associated signs of hypoperfusion are listed in Table 5.2. The main limitation of measurements of individual organ function is that derangement may be due to intrinsic disease as opposed to hypoperfusion, mandating measurements of global tissue perfusion.

Table 5.2 Signs and symptoms of organ hypoperfusion

Multiple markers of global tissue perfusion exist; they may be grouped broadly into measurements of either the amount of oxygen delivered to the tissue (upstream markers) or the adequacy of tissue DO2 given the level of metabolic demand (downstream markers). Examples of upstream markers include measurements of intravascular volume status, cardiac output, and direct calculation of DO2. Examples of downstream markers include measurements of either oxygen extraction or anaerobic metabolism. Whereas upstream markers are useful for gauging which aspect of circulatory system derangement requires therapy (i.e., intravascular volume, cardiac contractility, or vascular tone), downstream markers generally identify a global problem with perfusion, as well as inform the response to resuscitative interventions (i.e., the adequacy of resuscitation).

Measurements of volume status may be dichotomized as either static or dynamic. Static measurements record a point estimate of absolute intravascular volume status which is then extrapolated to assess preload responsiveness; whereas lower values are indicative of hypovolemia, higher values imply volume overload. In general, intravascular pressure (either venous or pulmonary arterial) is used to approximate volume. Central venous pressure (CVP) and pulmonary artery occlusion pressure (PAOP) are the most commonly used static measurements of intravascular volume status.

The principal limitation of static measurements is that both inter- and intra-patient variability of Frank-Starling curves renders interpretation of absolute measurements inaccurate. For example, a patient with an elevated CVP may still be operating on the steep portion of the Frank-Starling curve. Conversely, a patient with a low CVP need not remain preload responsive. Beyond this limitation, static pressure measurements are influenced by several additional factors, including changes in intra-thoracic pressure, valvular disease, and pulmonary hypertension.

In contrast to static measurements, dynamic measurements of volume responsiveness exploit physiologic variations in preload in an attempt to determine the efficacy of volume expansion. A determination of preload responsiveness is thus made irrespective of the underlying absolute volume status. Such “natural” variations in preload may be divided broadly into respiratory variations, such as pulse pressure variation (PPV), systolic pressure variation (SPV), and stroke volume variation (SVV), and positional variations, such as those induced by passive leg raise (PLR). Respiratory variation in arterial pressure reflects the cyclic change in preload during passive mechanical ventilation: preload increases during the inspiratory phase and decreases during expiration. These differences in preload translate into differences in cardiac output, whether measured as SPV, PPV, or SVV. Variability in preload, and hence cardiac output, is more pronounced when the heart operates on the steep portion of the Frank-Starling curve, such that an increased variability corresponds to preload responsiveness. The transfer of effective blood volume from the capacitance bed of the lower extremities induced by PLR results in a similar increase in filling pressure, the effect of which is again dependent upon preload responsiveness. Although dynamic measurements of intravascular volume are not without their limitations, their superiority over static measurements has been demonstrated convincingly (Michard and Teboul 2002; Marik et al. 2009), and they are preferred whenever possible.
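
As an illustration of how a respiratory dynamic index is derived, the sketch below computes pulse pressure variation from beat-to-beat pulse pressures over one mechanical breath. The commonly cited responsiveness threshold of roughly 13 % is an assumption of this example rather than a figure from the chapter, and all input values are hypothetical.

```python
# Pulse pressure variation (PPV): the swing in pulse pressure across one
# respiratory cycle, normalized to the mean of the extremes.

def pulse_pressure_variation(pulse_pressures_mm_hg: list[float]) -> float:
    """PPV (%) from beat-to-beat pulse pressures over one breath."""
    pp_max, pp_min = max(pulse_pressures_mm_hg), min(pulse_pressures_mm_hg)
    return 100 * (pp_max - pp_min) / ((pp_max + pp_min) / 2)

# Hypothetical beat-to-beat pulse pressures (systolic minus diastolic)
# recorded across a single passive mechanical breath.
ppv = pulse_pressure_variation([48, 52, 55, 50, 44, 42])
print(f"PPV = {ppv:.1f} %")  # ~26.8 %; a large swing suggests operation on
                             # the steep portion of the Frank-Starling curve,
                             # i.e., preload responsiveness
```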

Intravascular volume status represents only one parameter involved in determining tissue perfusion. Measurement of cardiac output is useful when evidence of shock persists despite optimization of volume status. Although cardiac output has been measured traditionally via thermodilution, both echocardiography (Maslow et al. 1996) and pulse contour analysis (de Vaal et al. 2005; McGee et al. 2007) have emerged as non-invasive alternatives with acceptable performance characteristics. Knowledge of the cardiac output, mean arterial pressure, and CVP also allows calculation of the systemic vascular resistance, a parameter that is useful primarily for determining the etiology of shock as opposed to guiding resuscitation.
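
The derived calculation mentioned above can be written out directly; the conversion factor of 80 (yielding dyn·s/cm5) is the standard convention, and the example hemodynamics are hypothetical.

```python
# Systemic vascular resistance (SVR) from mean arterial pressure (MAP),
# central venous pressure (CVP), and cardiac output (CO).

def systemic_vascular_resistance(map_mm_hg: float, cvp_mm_hg: float,
                                 co_l_min: float) -> float:
    """SVR in dyn*s/cm^5; the factor of 80 converts mm Hg*min/L."""
    return 80 * (map_mm_hg - cvp_mm_hg) / co_l_min

print(systemic_vascular_resistance(70, 8, 5.0))  # 992, within the commonly
                                                 # quoted normal range of
                                                 # ~800-1200 dyn*s/cm^5
```

A low calculated SVR in a hypotensive patient points toward a vasodilatory etiology, whereas a high SVR with low cardiac output suggests a hypovolemic or cardiogenic etiology, which is why this parameter aids classification of shock more than it guides ongoing resuscitation.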

The most specific upstream measurement of tissue perfusion is DO2. Oxygen delivery is dependent upon cardiac output, arterial hemoglobin concentration and saturation, as well as the partial pressure of oxygen in arterial blood. Importantly, whereas transfusion of allogeneic RBCs likely increases DO2, its effect on VO2 is less clear, with the majority of studies reporting either unchanged or decreased VO2 following transfusion (Kiraly et al. 2009; Tinmouth et al. 2006; Napolitano et al. 2009). Although low DO2 suggests tissue hypoperfusion, the calculation suffers from the same aforementioned limitations of static measurements, and resuscitation guided by driving DO2 to normal or even supranormal levels is both impractical and of unproven benefit (discussed below).

As compared to upstream markers, downstream markers measure adequacy of tissue perfusion with respect to oxygen supply-demand balance, making them more useful for diagnosing shock, assessing its severity, and tracking response to therapy. In general, downstream markers capture the magnitude of either tissue oxygen extraction or anaerobic metabolism.

During normal metabolism, oxygen delivery far exceeds consumption, such that VO2 is independent of DO2 over a wide range of values. Either decreased DO2 or increased VO2 is compensated for by increased oxygen extraction from arterial blood, resulting in a decreased oxygen saturation of venous hemoglobin as it exits the tissue bed. Low venous hemoglobin oxygen saturation is thus an early marker of tissue hypoperfusion. Venous hemoglobin oxygen saturation is measured readily using catheter-based fiber-optic reflectance spectroscopy in either the pulmonary artery (SvO2) or superior vena cava (ScvO2), with acceptable correlation between the two (Ladakis et al. 2001; Reinhart et al. 2004). Both SvO2 and ScvO2 reflect flow-weighted pooling of venous blood from multiple tissue beds with variable metabolic requirements. A normal value thus does not ensure adequate oxygenation of all organ systems. However, a low SvO2 (<65 %) is highly suggestive of tissue dysoxia (Krafft et al. 1993). The ScvO2 was a useful endpoint of resuscitation in a study of protocolized, goal-directed therapy of septic shock by Rivers et al. (Rivers et al. 2001).
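
The extraction logic described here is simple to express computationally. In the sketch below, the 65 % SvO2 cutoff comes from the text (Krafft et al. 1993), while the “normal” extraction ratio of roughly 25 % is a standard physiologic approximation assumed for illustration.

```python
# Oxygen extraction ratio (O2ER): the fraction of delivered oxygen the
# tissues remove, inferred from arterial and venous oxygen saturations.

def o2_extraction_ratio(sao2: float, svo2: float) -> float:
    return (sao2 - svo2) / sao2

sao2, svo2 = 0.98, 0.55
print(f"O2ER = {o2_extraction_ratio(sao2, svo2):.2f}")  # 0.44 vs ~0.25 normal
if svo2 < 0.65:
    print("Low SvO2: increased extraction, highly suggestive of tissue dysoxia")
```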

Once DO2 falls below a critical level, compensation via increased oxygen extraction is no longer sufficient, an oxygen debt develops, and anaerobic metabolism ensues. Below this critical level, VO2 is dependent upon DO2, and an increase in DO2 will result in a corresponding increase in VO2.

The three byproducts of mammalian anaerobic metabolism are lactate, pyruvate, and hydrogen ion. The arterial lactate concentration is thus directly proportional to the oxygen debt. Both elevation and failure of normalization of the serum lactate concentration are highly predictive of adverse outcomes among critically ill patients in shock (Alonso et al. 1973; Moomey et al. 1999; Suistomaa et al. 2000; Abramson et al. 1993). However, lactate is a non-specific marker of global tissue hypoxia, because several other conditions, including muscle hyperactivity, seizure, accelerated aerobic glycolysis, and liver disease, may result in elevation of the lactate concentration regardless of the presence of shock. Moreover, lactate clearance lags several hours behind improvement in tissue oxygenation, rendering the serum lactate concentration problematic for real-time resuscitative efforts.

Hydrogen ion concentration may be estimated by the serum bicarbonate concentration, arterial pH, or base deficit. The base deficit is defined as the amount of base (in mmol/L) required to titrate whole blood to a normal pH at normal values of temperature, PaCO2, and PaO2. Elevations of the base deficit beyond the normal range of −3 to 3 correlate with the presence and severity of shock (Davis et al. 1988; Rutherford et al. 1992). The base deficit normalizes rapidly following restoration of tissue perfusion, making it an ideal marker for resuscitation. Measurement of the serum bicarbonate concentration may be used as a surrogate for the base deficit with reasonable correlation (Eachempati et al. 2003; Martin et al. 2005) and does not require an arterial sample. Importantly, all measurements of serum acid are rendered inaccurate in the setting of exogenous bicarbonate administration. Furthermore, changes in minute ventilation must be appreciated, as variability in PaCO2 contributes to the serum acid concentration.
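
A hedged sketch of base deficit interpretation using only the figures given in the text (the normal range of −3 to 3 mmol/L and the bicarbonate caveat); the function and its wording are illustrative, not a validated clinical rule.

```python
# Interpret a base deficit per the thresholds described above.

def interpret_base_deficit(bd_mmol_l: float,
                           exogenous_bicarbonate: bool = False) -> str:
    if exogenous_bicarbonate:
        # Bicarbonate administration renders serum acid measures inaccurate.
        return "uninterpretable: exogenous bicarbonate given"
    if bd_mmol_l > 3:
        return "elevated: correlates with presence and severity of shock"
    if bd_mmol_l < -3:
        return "base excess: not consistent with hypoperfusion acidosis"
    return "within normal range (-3 to 3)"

print(interpret_base_deficit(8.0))                              # elevated
print(interpret_base_deficit(8.0, exogenous_bicarbonate=True))  # caveat
```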

Beyond tissue hypoperfusion, several additional causes of acidosis may co-exist in the critically ill patient, such as alcohol intoxication and hyperchloremia. Specifically, large-volume saline resuscitation results in a non-anion-gap, hyperchloremic metabolic acidosis and hence a persistently elevated base deficit despite normalization of perfusion (Kellum et al. 1998; O’Dell et al. 2007). Partitioning of the base deficit addresses this last issue (Fidkowski and Helstrom 2009).

3 Resuscitation Strategies

Several resuscitation strategies for hemorrhagic shock specifically are discussed below. However, we begin with general points regarding the conduct of resuscitation, regardless of etiology. Initial management of the patient in shock involves a rapid assessment of the relative contributions of the various aforementioned etiologies. Assessment typically proceeds in the following manner: (1) intravascular volume status (including hemoglobin concentration), (2) cardiac contractility, (3) vascular tone. In reality, multiple diagnostic maneuvers are often undertaken simultaneously. Correction of pathology occurs in a similar fashion, optimizing preload, followed by cardiac contractility, and finally vascular tone. This sequence is based upon the fundamental principles of cardiodynamics; efforts to improve either contractility or vascular tone will be ineffective in the face of hypovolemia. Furthermore, vasoactive medications offer little benefit if cardiac output is insufficient to perfuse end organs, and may even impede marginal cardiodynamics by increasing afterload. Resuscitation is a dynamic process that mandates frequent re-evaluation. Additional etiologies of shock frequently compound the initial insult. Resuscitation must also be monitored carefully to avoid the complications of overzealous volume expansion.

3.1 Timing of Resuscitation

It is intuitive that timely restoration of tissue perfusion will lead to improved outcomes. Animal models of hemorrhagic shock have demonstrated that tissue damage becomes irreversible beyond a critical period of hypoperfusion, after which time restoration of perfusion is superfluous (Shires et al. 1964). These observations have led to the concept of the resuscitation window, beyond which efforts to normalize perfusion are met with diminishing returns. Numerous data now support the existence of the resuscitation window. Kern and Shoemaker conducted a meta-analysis of 21 randomized trials comparing goal-directed therapy to conventional management of patients in shock (Kern and Shoemaker 2002). The majority of studies involved optimization of PAOP, cardiac output, and DO2 to either normal or supranormal levels. A benefit to such therapy was observed only among those studies that maximized oxygen delivery either before or early after the onset of organ dysfunction. Early efforts to optimize tissue perfusion have been shown to be of benefit even in the absence of goal achievement, suggesting that the timing of resuscitative efforts, rather than attainment of an arbitrary goal, contributes to outcome benefit (Lobo et al. 2000).

How long, then, does such a resuscitation window remain open? Gattinoni et al. demonstrated no survival benefit to hemodynamic optimization attempted 12–36 h after the onset of organ failure (Gattinoni et al. 1995). By contrast, Rivers et al. documented a significant mortality improvement with protocol-driven resuscitation within 6 h of presentation to the emergency ward (Rivers et al. 2001). Thus, resuscitation should begin as early as possible; hypoperfusion intervals of greater than 6–12 h likely cannot be reversed completely with subsequent hemodynamic optimization.

3.2 Permissive Hypotension

In the case of hemorrhagic shock, restoration of tissue perfusion must be balanced against exacerbation of ongoing hemorrhage by increasing blood pressure. This dilemma has given rise to the debate regarding permissive hypotension, which involves deliberate tolerance of lower mean arterial pressures in the face of hemorrhagic shock in order to minimize further bleeding. This strategy is based on the notion that decreasing perfusion pressure will maximize success of the body’s natural mechanisms for hemostasis, such as arteriolar vasoconstriction, increased blood viscosity, and in situ thrombus formation. Animal models of uncontrolled hemorrhage have revealed that crystalloid resuscitation to either replace three times the lost blood volume (Bickell et al. 1991) or maintain 100 % of pre-injury cardiac output (Owens et al. 1995) exacerbates bleeding (Bickell et al. 1991; Owens et al. 1995) and increases mortality (Bickell et al. 1991) as compared to more limited fluid resuscitation.

Randomized trials that compare fluid management strategies prior to control of hemorrhage among human subjects are limited. In the first large scale trial, Bickell et al. randomized 598 patients in hemorrhagic shock (systolic blood pressure <90 mm Hg) who had sustained penetrating torso trauma to either crystalloid resuscitation or no resuscitation prior to operative intervention (Bickell et al. 1994). Pre-specified hemodynamic targets were not used. Mean systolic arterial blood pressure was significantly decreased upon arrival to the emergency department for the delayed resuscitation group as compared to the immediate resuscitation group (72 mm Hg vs. 79 mm Hg, respectively, p = 0.02) with a corresponding increase in survival (70 vs. 62 %, respectively, p = 0.04). A trend towards a decreased incidence of postoperative complications was also observed for the delayed resuscitation group.

Two more recent trials have failed to replicate these findings. Turner et al. randomized 1,306 trauma patients with highly diverse injury patterns and levels of stability to receive early vs. delayed or no fluid resuscitation (Turner et al. 2000). Although no mortality difference was observed (10.4 % for the immediate resuscitation group vs. 9.8 % for the delayed/no resuscitation group), protocol compliance was poor (31 % for the early group and 80 % for the delayed/no resuscitation group), limiting interpretability. Most recently, Dutton et al. randomized 110 trauma patients presenting in hemorrhagic shock (systolic blood pressure <90 mm Hg) to receive crystalloid resuscitation to a target systolic blood pressure of >70 mm Hg vs. >100 mm Hg (Dutton et al. 2002). Randomization occurred following presentation to the emergency department. Not all patients required operation, and hemorrhage control was determined at the discretion of the trauma surgeon or anesthesiologist. Although there was a significant difference in mean systolic blood pressure during active hemorrhage between the conventional and low groups (114 mm Hg vs. 100 mm Hg, respectively, p < 0.01), the mean pressure in the low group remained substantially higher than the intended target of 70 mm Hg, and the absolute difference between groups was likely insignificant clinically. Mortality was infrequent and did not vary by resuscitation arm (7.3 % for each group).

Methodological variability between these trials has precluded a meaningful meta-analysis (Kwan et al. 2003), and may help to explain the discrepant mortality findings. It is clear that the degree of hemorrhagic shock was most pronounced in the study of Bickell et al., as evidenced by the lowest presenting systolic blood pressure as well as the highest mortality. Furthermore, randomization was accomplished in the pre-hospital setting, and all patients required operative intervention. By contrast, mortality was infrequent in the study of Dutton et al., and the target systolic blood pressure of 70 mm Hg in the “low” group was, on average, not achieved. Thus, at present, it is possible to conclude that limited volume resuscitation prior to operative intervention may be of benefit among patients with penetrating trauma in hemorrhagic shock, although the optimum level of permissive hypotension remains unknown. The benefit of such therapy among a more diverse cohort of patients in hemorrhagic shock, with a low associated risk of death, is not clear. Finally, regardless of therapeutic benefit, reliable achievement of permissive hypotension appears challenging once hospital care has begun.

3.3 Resuscitation to Supra-Normal Physiology

Once definitive hemorrhage control has been obtained, is there benefit to pushing tissue perfusion to a “supra-normal” level? At the cellular level, shock is characterized by oxygen demand that exceeds supply, such that VO2 is DO2-dependent. Repayment of the oxygen debt is thus signaled by an increase in DO2 that does not result in a corresponding increase in VO2. Shoemaker et al. observed that survivors of shock demonstrate increases in DO2 to “supranormal” levels (DO2 index ≥600 mL/min/m2) as compared to non-survivors (Bland et al. 1985; Velmahos et al. 2000), suggesting that endogenous eradication of the oxygen debt via enhanced physiology is advantageous. This observation led to the hypothesis that resuscitation to supranormal physiology would result in improved outcomes. Thresholds of supranormal physiology have included cardiac index >4.5 L/min/m2, DO2 index >600 mL/min/m2, and VO2 index >170 mL/min/m2. Unfortunately, results of randomized trials comparing supranormal to conventional resuscitation among critically ill patients have been in large part disappointing (Gattinoni et al. 1995; Velmahos et al. 2000; McKinley et al. 2002; Sandham et al. 2003; Heyland et al. 1996; Richard et al. 2003; Rhodes et al. 2002).
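
The cited thresholds translate into a simple goal check, shown below purely to make the targets concrete; the chapter goes on to argue against resuscitating to them.

```python
# Supranormal resuscitation targets as listed in the text.
SUPRANORMAL_GOALS = {
    "cardiac_index": 4.5,  # L/min/m2
    "do2_index": 600.0,    # mL/min/m2
    "vo2_index": 170.0,    # mL/min/m2
}

def goals_attained(measured: dict[str, float]) -> bool:
    """True only if every supranormal target is exceeded."""
    return all(measured[k] > goal for k, goal in SUPRANORMAL_GOALS.items())

# Hypothetical patient: two of three targets met.
print(goals_attained({"cardiac_index": 4.8, "do2_index": 620.0,
                      "vo2_index": 160.0}))  # False: VO2 index falls short
```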

Several possibilities may explain the lack of benefit observed in these trials. As mentioned previously, the timing of goal-attainment is of importance; resuscitation to supranormal physiology following the onset of organ failure does not appear to impart a survival advantage (Kern and Shoemaker 2002). However, increasing the DO2 index to ≥600 mL/min/m2 prior to the onset of critical illness did not result in a mortality benefit in a randomized trial of nearly 2,000 high-risk surgical patients (Sandham et al. 2003), questioning the findings of the meta-analysis by Kern and Shoemaker. Furthermore, both shock and resultant organ failure are seldom predictable events, thus precluding pre-emptive resuscitation to supranormal hemodynamics.

A second finding in the meta-analysis by Kern and Shoemaker was that the benefits of supranormal resuscitation were observed only if the stated goals were in fact obtained. Although this last point appears intuitive, it is important to note that resuscitation to supranormal physiology is oftentimes impossible. In a recent cohort study of hemorrhagic shock, only 70 % of patients were able to achieve the supranormal physiologic values of cardiac index >4.5 L/min/m2, DO2 index >600 mL/min/m2, and VO2 index >170 mL/min/m2 (Velmahos et al. 2000). Similarly, pre-specified goals were attained in only 7 of 21 (33 %) studies included in the meta-analysis by Kern and Shoemaker (Kern and Shoemaker 2002). Beyond impracticality, measurement of the parameters necessary to calculate DO2 and VO2 requires a functional pulmonary artery catheter. Both misinterpretation of data (Iberti et al. 1990) and the associated morbidity of this device (Ivanov et al. 2000) must be taken into account. Although newer arterial catheter-based calculations of cardiac output have become common (McGee et al. 2007), this modality has not been validated within the context of a goal-directed resuscitation protocol. Perhaps most concerning is the fact that resuscitation to supranormal physiology results invariably in increased volume expansion, thereby increasing the risks of pulmonary edema, intestinal ischemia, and abdominal compartment syndrome (ACS) (McKinley et al. 2002; Balogh et al. 2003). In light of these data, resuscitation of shock based on attainment of supranormal physiology cannot be advocated currently.

3.4 Red Blood Cell Transfusion

Although the risks and benefits of allogeneic RBC transfusion are discussed elsewhere in this book, we offer some salient points with respect to resuscitation of hemorrhagic shock. Because hemorrhagic shock involves impaired tissue perfusion due to both hypovolemia and anemia, it is intuitive that restoration of the hemoglobin concentration via RBC transfusion both reverses shock and improves outcomes. However, the optimal target hemoglobin concentration during resuscitation remains unknown.

During resuscitation, a balance must occur between the competing goals of maximal oxygen content (hematocrit = 100 %) and minimal blood viscosity (hematocrit = 0 %). Furthermore, irrespective of hematocrit, the oxygen carrying capacity of transfused allogeneic erythrocytes is impaired due to storage-induced changes in both deformability and hemoglobin oxygen affinity. Accordingly, although many studies have measured an increase in DO2 following transfusion of allogeneic RBCs, almost none have reported an increase in VO2 (Napolitano et al. 2009). Finally, beyond a role in DO2, erythrocytes are integral to hemostasis via their involvement in platelet adhesion and activation, as well as thrombin generation. The hematocrit is thus relevant to hemorrhagic shock as it relates to both oxygen availability and hemostatic integrity.

Early canine models of hemorrhagic shock suggested that VO2 is optimized at a relatively high hematocrit (range 35–42 %) (Crowell et al. 1959). However, hematocrit variation was achieved via auto-transfusion of the animal’s shed whole blood, eliminating the aforementioned limitations of allogeneic erythrocytes, and rendering the results inapplicable to modern resuscitation of hemorrhagic shock. Furthermore, acute normovolemic hemodilution of dogs to a hematocrit of 10 % is well tolerated, with little decrement in oxygen delivery secondary to a compensatory increase in cardiac output (Takaori and Safar 1966).

Retrospective observations among critically ill surgical patients in the 1970s suggested a hematocrit of 30 % as optimal for both oxygen-carrying capacity and survival (Czer and Shoemaker 1978). Such studies formed the basis of the traditional recommendation to maintain the hematocrit >30 %, although the marked limitations of this retrospective literature were recognized ultimately. As the deleterious effects of RBC transfusion became increasingly evident, renewed interest in the ideal transfusion trigger occurred. The Transfusion Requirements in Critical Care (TRICC) trial, which compared restrictive (hemoglobin <7.0 g/dL) and liberal (hemoglobin <9.0 g/dL) transfusion triggers among 838 patients, provided the first level I evidence regarding RBC transfusion strategies among the critically ill (Hebert et al. 1999). Although inclusion criteria did not specify ongoing resuscitation, 37 % of patients were in shock at the time of enrollment as evidenced by the need for vasoactive drugs. No difference in 30-day mortality was observed between groups. However, in-hospital mortality, as well as mortality among less severely ill patients (Acute Physiology and Chronic Health Evaluation II score <20) and younger patients (age <55 years), was significantly lower in the restrictive transfusion group. Current evidence thus suggests that a hemoglobin concentration of >7 g/dL is at least as well tolerated as a hemoglobin concentration of >9 g/dL among critically ill patients, although extrapolation of these data is necessary to extend this recommendation to patients in hemorrhagic shock.

It is possible that hemoglobin concentrations below 7 g/dL are safe, particularly in younger patients. However, a hemoglobin concentration of 5 g/dL appears to be the threshold for critical anemia. Whereas hemodilution of healthy volunteers as low as a hemoglobin concentration of 5 g/dL is well tolerated (Weiskopf et al. 1998), a study of postoperative patients who refused RBC transfusion reported a sharp increase in mortality below this same hemoglobin concentration (Carson et al. 2002). Such populations differ fundamentally from the multiply-injured, exsanguinating patient in need of resuscitation. However, these data are provocative, and future large scale trials of lower transfusion triggers for the resuscitation of hemorrhagic shock are warranted in light of the accumulating evidence documenting the untoward effects of RBC transfusion.

In addition to oxygen transport, RBCs play an important role in hemostasis. As the hematocrit rises, platelets are displaced laterally towards the vessel wall, placing them in contact with the injured endothelium; this phenomenon is referred to as margination. Platelet adhesion via margination appears optimal at a hematocrit of 40 % (Goldsmith 1972). Erythrocytes are also involved in the biochemical and functional responsiveness of activated platelets. Specifically, RBCs increase platelet recruitment, production of thromboxane B2, and release of both ADP and β-thromboglobulin. Furthermore, RBCs participate in thrombin generation through exposure of procoagulant phospholipids. Interestingly, animal models suggest that a decrease in the platelet count of 50,000/μL is compensated for by a 10 % increase in hematocrit (Quaknine-Orlando et al. 1999). Despite these experimental observations, no prospective data exist detailing the relationship between hematocrit, coagulopathy, and survival among critically injured trauma patients.

The myriad risks of RBC transfusion must be kept in mind during resuscitation of patients in hemorrhagic shock. The immunomodulatory properties of RBC transfusion were first noted as a correlation between transfusion and graft survival following solid organ transplantation (Opelz and Terasaki 1978). The observation that tumor recurrence was associated with RBC transfusion soon followed (Gantt 1981). It is now appreciated that RBC transfusion both impairs humoral immunity and causes elaboration of pro-inflammatory cytokines (Shanwell et al. 1997). These phenomena are dependent upon both the transfusion dose and the storage age of the blood. Moreover, transfused blood exerts a number of negative effects upon cardiodynamics, including increased pulmonary vascular resistance, depletion of endogenous nitric oxide stores, and both regional and systemic vasoconstriction (Fernandes et al. 2001).

In summary, prior investigations into the ideal hematocrit for oxygen-carrying capacity during hemorrhagic shock are in large part irrelevant to modern-day resuscitation with allogeneic blood. Banked erythrocytes are subject to a time-dependent diminution of oxygen-carrying capacity, and the effect of blood transfusion on oxygen consumption, regardless of hematocrit, remains questionable. The TRICC trial suggested that patients in shock tolerate a hemoglobin concentration of 7.0 g/dL at least as well as 9.0 g/dL, although this hypothesis was not tested during the initial resuscitation of hemorrhagic shock specifically. Furthermore, the role of erythrocytes in hemostasis must be considered. In practice, clinical circumstance (e.g., ongoing hemorrhage with hemodynamic instability and coagulopathy), as opposed to an isolated laboratory measurement, should inform the decision to transfuse. However, until there is definitive evidence to challenge the TRICC data, a hemoglobin concentration of <7 g/dL should be considered the default transfusion trigger for resuscitation from shock.
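
In pseudocode form, the practice point of this section reduces to the following sketch; the function is illustrative, and the clinical override reflects the text’s emphasis on circumstance over an isolated laboratory value.

```python
# Default restrictive transfusion trigger (<7 g/dL), overridden by
# clinical circumstance during active hemorrhage.

def should_transfuse_rbc(hemoglobin_g_dl: float,
                         ongoing_hemorrhage_with_instability: bool) -> bool:
    if ongoing_hemorrhage_with_instability:
        return True  # clinical picture, not the lab value, drives transfusion
    return hemoglobin_g_dl < 7.0

print(should_transfuse_rbc(7.8, ongoing_hemorrhage_with_instability=False))  # False
print(should_transfuse_rbc(7.8, ongoing_hemorrhage_with_instability=True))   # True
```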

One final point regarding the use of blood products for the resuscitation of patients in hemorrhagic shock involves the relative quantities of RBCs, clotting factors, and platelets to administer. It is now recognized that resuscitation of patients who require massive transfusion (MT; >10 U RBC within 24 h) using RBCs alone results in coagulopathy due to both consumption and dilution of coagulation factors as well as platelets. Both acidosis and hypothermia exacerbate this coagulopathy. Our group and others noted that mortality among massively transfused patients was reduced when increased amounts of both plasma and platelets were administered empirically. Specifically, when introducing the concept of RBC:FFP ratios, we reported increased mortality among a cohort of patients with major vascular trauma associated with RBC:FFP ratios greater than 5:1, with overt coagulopathy observed nearly universally with ratios exceeding 8:1 (Kashuk et al. 1982). In 2007, Borgman et al. published a series of 254 massively transfused US soldiers in Iraq and Afghanistan, reporting markedly improved survival among those transfused with a RBC:FFP ratio in the range of 1.5:1, as compared to higher ratios (Borgman et al. 2007). This ratio appeared intuitively appealing as it most closely resembled that of whole blood, although a 1:1 formulation is actually both anemic (hematocrit 27 %) and clotting factor deficient (65 % activity) as compared to fresh whole blood (Armand and Hess 2003). Several subsequent studies, in both the military and civilian settings, have corroborated the findings of Borgman et al. (Duchesne et al. 2008; Holcomb et al. 2008; Teixeira et al. 2009). An association between early, aggressive FFP administration and improved survival has also been documented among trauma patients who underwent sub-massive transfusion (Spinella et al. 2008). These data have given rise to the concept of damage control resuscitation, which involves early transfusion of increased amounts of both clotting factors and platelets, in addition to minimization of crystalloid resuscitation, in patients who are expected to require MT. Currently, many trauma centers advocate pre-emptive transfusion of RBC:FFP using a target ratio of 1:1 for such patients.

Unfortunately, the literature addressing component transfusion ratios during MT suffers from several substantial methodological limitations. Despite a myriad of retrospective data, mathematical models (Hirshberg et al. 2003), and expert opinion, there remains no prospective evidence to support an empiric transfusion ratio. A major limitation of the retrospective literature involves survival bias. Specifically, it remains unclear if increased FFP transfusion improves survival or if patients who survive simply live long enough to receive more FFP. Indeed, patients who are bleeding faster receive less plasma as the trauma team and blood bank struggle to keep up. Related intimately to the issue of survival bias is that of the time period over which the RBC:FFP ratio is calculated. Although over 80 % of RBC transfusions are administered within 6 h of injury, most studies have reported the cumulative RBC:FFP ratio as calculated at 24 h. Such a strategy exacerbates survival bias, as the RBC:FFP ratio is known to decrease over time. Accounting for the time-dependent nature of the RBC:FFP transfusion ratio eliminated any association with survival in one recent report (Snyder et al. 2009). Furthermore, when the cumulative RBC:FFP ratio was analyzed at 6 h as opposed to 24 h, our group identified a ratio in the range of 2:1–3:1, as opposed to 1:1, as that associated with the lowest predicted mortality (Kashuk et al. 2008).
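
The time-window effect described above is easy to demonstrate: the same transfusion record yields different cumulative ratios depending on when the ratio is computed. The event data below are hypothetical.

```python
# Cumulative RBC:FFP ratio within a time window after injury.
# Events are (hours_after_injury, product) pairs.

def rbc_ffp_ratio(events: list[tuple[float, str]], window_h: float) -> float:
    rbc = sum(1 for t, p in events if p == "RBC" and t <= window_h)
    ffp = sum(1 for t, p in events if p == "FFP" and t <= window_h)
    return rbc / ffp if ffp else float("inf")

# RBC-heavy early transfusion with plasma catching up later, mirroring the
# pattern the text describes.
events = [(0.5, "RBC"), (1.0, "RBC"), (2.0, "RBC"), (3.0, "RBC"),
          (4.0, "RBC"), (5.0, "RBC"),
          (2.0, "FFP"), (5.5, "FFP"), (10.0, "FFP"), (18.0, "FFP")]

print(rbc_ffp_ratio(events, window_h=6))   # 3.0 -> reads as 3:1 at 6 h
print(rbc_ffp_ratio(events, window_h=24))  # 1.5 -> reads as 1.5:1 at 24 h
```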

The next major limitation involves the lack of a mechanistic link between a lower RBC:FFP ratio and improved survival. The clinical efficacy of FFP remains largely unproven (O’Shaughnessy and Atterbury 2004), and no study has documented an association between a lower RBC:FFP ratio and fewer total blood products administered. Moreover, differences in laboratory markers of coagulopathy (e.g., PT, TEG) have not been demonstrated between groups of varying RBC:FFP ratios. In fact, a canine model showed no benefit to adding FFP following MT in terms of changes in coagulation protein levels or clotting times (Martin et al. 1985). Finally, the benefit of a 1:1 RBC:FFP ratio has also not been consistent across various mechanisms of injury (Mace et al. 2009).

Although many experts advocate a RBC:FFP transfusion ratio of 1:1 during MT, the lowest ratio achieved in most studies approaches 1.5:1. While moving from an RBC:FFP ratio of 2:1 to 1:1 may appear trivial, such a paradigm shift represents a 100 % increase in FFP utilization. An increase of this magnitude would place tremendous strain upon the marginal FFP donor pool, as well as increase blood bank labor exponentially, likely to the point of non-sustainability in the event of a mass casualty. Finally, unbridled FFP administration must be viewed with caution in light of the accumulating evidence detailing the immunomodulatory properties of such therapy.

In summary, the literature involving empiric component therapy suffers from several methodological limitations. Currently, the optimal empiric RBC:FFP ratio for resuscitation of patients who require MT appears to be in the range of 2:1–3:1. However, whenever possible, component replacement should be both individualized and goal-directed, such that overzealous clotting factor and platelet replacement, and the complications thereof, are minimized.

4 Complications of Resuscitation

Resuscitation beyond restoration of adequate tissue perfusion is both ineffective and potentially harmful. Although aggressive volume expansion may be life-saving for the immediate resuscitation of severe hypotension, subsequent fluid administration should occur only when evidence of preload responsiveness exists (Vincent and Weil 2006). Excessive volume expansion is particularly detrimental in the setting of critical illness as inflammatory-mediated increased capillary permeability drives fluid from the intravascular space into the interstitium, compounding tissue edema with little to no improvement in hemodynamics. The effects of tissue edema on specific organ systems are summarized in Table 5.3.

Table 5.3 Effects of tissue edema on specific organ systems

Intra-abdominal hypertension leading to abdominal compartment syndrome (ACS) is a particularly devastating complication of volume resuscitation with a high associated morbidity and mortality (Cheatham et al. 2007). Once believed to be a complication specific to trauma patients, both intra-abdominal hypertension and ACS have now been reported across a wide variety of patient cohorts, ranging from organ transplantation (Biancofiore et al. 2003) to pediatrics (Beck et al. 2001; Ball et al. 2008). The pathophysiology of ACS involves a progressive increase in abdominal pressure due to any combination of diminished abdominal wall compliance, increased intra-luminal intestinal contents, increased intra-peritoneal fluid, and increased tissue edema. Importantly, the abdomen need not be the initial location of the pathology (Madigan et al. 2008); the development of ACS in this instance is termed secondary ACS. Increases in abdominal pressure eventually become sufficient to impede venous return from both the abdominal viscera (resulting in intestinal ischemia) and the inferior vena cava (causing decreased filling pressures and obstructive shock). Both impedance of urinary drainage and respiratory embarrassment secondary to elevated airway pressures are also characteristic. Several risk factors for ACS are recognized; both large-volume fluid resuscitation and attempts to resuscitate to supranormal physiology have been implicated consistently (Balogh et al. 2003; Madigan et al. 2008; Malbrain et al. 2005).

Physical exam findings, such as elevated airway pressures, oliguria, and tube feeding intolerance, may aid in the diagnosis of abdominal hypertension, but are in and of themselves insensitive (Kirkpatrick et al. 2000; Greenhalgh and Warden 1994), mandating measurement of the intra-abdominal pressure. Several techniques have been described, including transduction of intra-gastric, intra-vesicular (bladder), and intra-peritoneal pressure. Measurement of the intra-vesicular pressure is the current reference standard, with several noteworthy technical considerations; pressure is expressed in mm Hg. Normal intra-abdominal pressure is <7 mm Hg, pressures >12 mm Hg constitute intra-abdominal hypertension, and a sustained pressure ≥20 mm Hg in the presence of organ failure is diagnostic of ACS. Disease severity may also be expressed as the abdominal perfusion pressure, defined as the mean arterial pressure minus the intra-abdominal pressure. An abdominal perfusion pressure <50–60 mm Hg is associated with poor outcomes among patients with intra-abdominal hypertension (Cheatham et al. 2000).
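
The diagnostic thresholds just described map directly to a classification sketch; the function names are illustrative, and the sustained-pressure requirement for ACS is noted in a comment rather than modeled.

```python
# Classify intra-abdominal pressure (IAP) and compute abdominal perfusion
# pressure (APP) using the thresholds given in the text (all in mm Hg).

def classify_iap(iap: float, organ_failure: bool) -> str:
    if iap >= 20 and organ_failure:
        # Diagnostic of ACS only if the elevation is sustained.
        return "abdominal compartment syndrome"
    if iap > 12:
        return "intra-abdominal hypertension"
    return "normal" if iap < 7 else "upper-normal"

def abdominal_perfusion_pressure(map_mm_hg: float, iap: float) -> float:
    """APP = MAP - IAP; values <50-60 mm Hg portend poor outcomes."""
    return map_mm_hg - iap

print(classify_iap(22, organ_failure=True))  # abdominal compartment syndrome
print(abdominal_perfusion_pressure(70, 22))  # 48 -> below the 50-60 threshold
```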

Medical therapy aimed at reducing abdominal pressure may be attempted for the hemodynamically stable patient in the absence of worsening organ failure. Paralysis, intestinal decompression, and diuresis are all effective means to decrease abdominal pressure. However, sustained or worsening intra-abdominal hypertension after a brief trial of non-operative maneuvers mandates surgical decompression, as delay in definitive decompression worsens outcomes substantially (Cheatham et al. 2000). Percutaneous catheter decompression may be considered when elevated abdominal pressure is secondary to intra-peritoneal fluid (e.g., ascites). Small case series suggest that this technique may be particularly useful among burn patients (Latenser et al. 2002; Corcos and Sherman 2001; Parra et al. 2006). However, beyond this specific circumstance, surgical decompression via laparotomy remains the definitive treatment for ACS. Failure of improvement following surgical decompression should raise concern for either inadequate decompression or misdiagnosis. When timely and effective surgical decompression is achieved, the abdomen is usually amenable to closure within 7 days (Burlew et al. 2012).

Beyond ACS, aggressive volume expansion results in a variety of additional untoward consequences. Positive fluid balance is associated with worsened pulmonary dynamics among patients with acute lung injury (Wiedemann et al. 2006; Sakr et al. 2005) as well as increased post-operative complications following gastrointestinal surgery (Brandstrup et al. 2003; Kudsk 2003; Nisanevich et al. 2005). Importantly, intentional fluid restriction during critical illness does not appear to increase the incidence of renal injury, provided that fluid balance is monitored vigilantly (Wiedemann et al. 2006). Although the ideal fluid management strategy both during and following resuscitation remains controversial (Bagshaw and Bellomo 2007; Pruitt 2000; Durairaj and Schmidt 2008), recent evidence suggests that restrictive fluid strategies are well tolerated, and may even be beneficial. The use of dynamic measurements of fluid responsiveness (Marik et al. 2009), restrictive transfusion triggers (Hebert et al. 1999), and avoidance of routine supra-physiologic resuscitation (Balogh et al. 2003) represent evidence-based strategies to minimize unnecessary and potentially harmful volume expansion.

5 Summary

Effective resuscitation of patients in hemorrhagic shock entails balancing timely restoration of tissue perfusion with parsimonious subsequent volume expansion in order to avoid the detrimental effects of fluid overload. Although definitive evidence documenting a benefit to permissive hypotension is lacking, periods of hypertension should be avoided in the bleeding patient, and definitive hemorrhage control should be obtained immediately. When diagnosing shock and monitoring resuscitative efforts, dynamic measurements of fluid responsiveness are preferred, and endpoints of resuscitation should incorporate downstream markers that normalize rapidly following restoration of tissue perfusion (e.g., SvO2, base deficit). The target hemoglobin concentration during resuscitation of patients in hemorrhagic shock remains unknown, although it is likely somewhere between 7 and 10 g/dL. Rather than a fixed laboratory target, hemodynamic instability in the face of ongoing hemorrhage should prompt additional transfusion. Current evidence suggests replacing RBCs and FFP in a ratio of 2:1. Resuscitation of patients in hemorrhagic shock invariably involves massive volume expansion; knowledge of potential complications, particularly ACS, is imperative for any clinician caring for these patients.