Introduction

Acute liver failure (ALF) is a catastrophic disorder characterized by the rapid development of coagulopathy and hepatic encephalopathy after severe acute liver damage [1]. The probability of spontaneous recovery is usually low, with emergency liver transplantation (LT) often being the only effective treatment [1]. Due to the limited availability of liver donors, however, not all patients listed for LT receive a liver graft, and the mortality of patients awaiting LT ranges from 10 to 40% [1, 2].

The high mortality rate of ALF and the limited number of organs available from deceased donors have necessitated the use of emergency adult-to-adult living-donor liver transplantation (LDLT) in many countries [3–5]. We and others have shown that in patients with ALF, LDLT is associated with acceptable recipient survival and donor morbidity rates [4, 6]. However, direct data comparing the long-term outcomes of deceased-donor liver transplantation (DDLT) and LDLT are limited.

Patient survival rates after emergency LT for ALF are generally lower than those of patients who undergo LT for chronic end-stage liver disease (ESLD), with 1-year survival rates in ALF patients ranging from 60 to 80% [7, 8]. Given the critical shortage of donor livers, it has been suggested that the allocation of liver grafts to ALF patients should consider the likelihood of post-transplant patient survival, thus avoiding futile LT in patients too ill to benefit from it [9, 10]. To date, however, none of the prognostic models used to assess post-transplant outcomes in ALF patients has reliably defined criteria for the futility of emergency LT [11, 12].

In this study, we have directly compared the long-term patient and graft outcomes after emergency DDLT and LDLT in ALF patients. We also identified factors predictive of post-transplant patient mortality, and developed a survival prediction model to define when LT is safe or futile in these patients.

Patients and methods

Study subjects

A prospective database of all consecutive adult patients who underwent primary LT for ALF between 2000 and 2009 in our institution was analyzed. ALF was defined based on the criteria of the American Association for the Study of Liver Diseases (AASLD) [1], including the sudden development of severe coagulopathy (international normalized ratio [INR] ≥1.5) and mental alteration with an illness duration of no longer than 26 weeks. Patients with underlying chronic diseases such as chronic hepatitis B and autoimmune hepatitis were included if they had normal liver function before the onset of symptoms and there was no evidence of cirrhosis. Patients younger than 16 years were excluded. This study was approved by the Institutional Review Board of the Asan Medical Center (Approval number: 2010-0506).

Evaluation of patients for LT

Patients were managed in an intensive care unit with standardized protocols [6]. Those developing grade 3 or higher encephalopathy were sedated and ventilated. Norepinephrine was used as the primary vasopressor and hemofiltration was used as renal replacement therapy. An artificial liver support system, i.e., a molecular adsorbent recirculating system (MARS), was applied in one patient before transplantation. All listed patients received broad-spectrum antimicrobial therapy.

Immediately following the diagnosis of ALF, patients without contraindication for LT were listed on the Korean Network for Organ Sharing (KONOS) and were given national priority (status 1) for available deceased-donor livers. At the same time, the need for an emergency LT was explained to each patient’s next of kin, who were also informed in detail of the risks and benefits of DDLT and adult LDLT. Maximum efforts were made to avoid any coercion, and written informed consent was obtained from each living donor candidate according to the guidelines of our institutional ethics committee. The spontaneous willingness of each potential donor was confirmed by social workers, transplantation coordinators, and psychologists if necessary. All donations were approved by the institutional ethics committee and KONOS. Evaluation of a living-donor candidate did not preclude or delay DDLT if a suitable deceased-donor liver became available.

Emergency LT was performed in patients who showed progression of encephalopathy to grade 3 or 4. Brain computed tomography (CT) scan was routinely performed at the time of LT. Evidence of irreversible brain damage, uncontrolled sepsis, severe irreversible cardiopulmonary disease, extrahepatic malignancy, active alcohol or substance abuse, or human immunodeficiency virus infection was considered a contraindication for transplantation.

Evaluation of living-donor candidates

Living-donor candidates were admitted to the emergency room for donor evaluation, and all procedures were performed in an emergency manner as described previously [6]. Briefly, the selection of living donors was based on their medical history, physical examination, laboratory findings, imaging data including abdominal ultrasonography (USN), CT for graft/recipient size matching (three-dimensional CT with volumetric analysis), and routine percutaneous USN-guided liver biopsy. Liver tissue samples were immediately evaluated by a pathologist using frozen sections, and a donor candidate with >30% steatosis was deemed unacceptable. The minimally required graft volume was defined as an estimated graft-recipient weight ratio (GRWR) of ≥0.8. When a single-graft transplant did not appear feasible after the consideration of donor safety (remnant volume <30% of total liver volume) and the possibility of a small-for-size graft for the recipient, a dual-graft transplant was considered as a last resort. In all transplants, donor and recipient ABO blood groups were identical or compatible.

Post-LT management of recipients and donors

The protocols for primary immunosuppression and prophylaxis against hepatitis B recurrence used for recipients of both deceased- and living-donor organs have been described previously [6]. Liver recipients and donors were followed in the outpatient clinic monthly for the first year after discharge, and every 3 months thereafter.

Statistical analyses

Patient and graft survivals were analyzed from the time of transplantation. Patients were followed until death or the end of February 2011. Kaplan–Meier analysis was used to estimate overall post-LT patient and graft survival rates, with groups compared using the log-rank test.
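For illustration, the survival comparison described above maps directly onto the R survival package cited in this section. The following is a minimal sketch; the data frame dat and its column names are hypothetical placeholders, not the study data.

```r
## Minimal sketch of the Kaplan-Meier / log-rank analysis described above,
## using the R 'survival' package. 'dat' and its columns are hypothetical.
library(survival)

dat <- data.frame(
  time_months = c(3, 25, 46, 12, 60, 78, 9, 42),  # follow-up from transplant
  death       = c(1, 0, 0, 1, 0, 0, 1, 0),        # 1 = died, 0 = censored
  tx_type     = factor(rep(c("DDLT", "LDLT"), 4))
)

## Kaplan-Meier estimates of post-LT patient survival, by transplant type
km_fit <- survfit(Surv(time_months, death) ~ tx_type, data = dat)
summary(km_fit, times = c(12, 36))                # 1- and 3-year survival

## Log-rank test comparing the DDLT and LDLT survival curves
survdiff(Surv(time_months, death) ~ tx_type, data = dat)
```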

Potential prognostic factors for survival were evaluated at the time of transplant and were analyzed by the Cox proportional hazards (PH) model. These factors included recipient characteristics such as age; gender; etiology of ALF; days from first symptom to encephalopathy; days from diagnosis to transplant; grade of encephalopathy; aspartate aminotransferase; alanine aminotransferase; albumin; total bilirubin; INR; serum sodium; blood urea nitrogen; creatinine; estimated glomerular filtration rate (GFR); white blood cell count; hemoglobin; platelet count; the model for end-stage liver disease (MELD) score; vasopressor requirement; and need for renal replacement therapy. Factors analyzed also included donor and graft characteristics such as age; gender; degree of steatosis; GRWR; and type of transplant (LDLT or DDLT).

GRWR was calculated as graft weight (kg) × 100/recipient weight (kg). MELD was calculated as 11.2 × ln(INR) + 3.78 × ln(bilirubin) + 9.57 × ln(creatinine) + 6.43 [13]. GFR was estimated according to the modification of diet in renal disease (MDRD) equation: GFR (mL/min/1.73 m²) = 175 × (serum creatinine)^(−1.154) × (age)^(−0.203) × 1.212 (if black) × 0.742 (if female) [14]. Because the serum creatinine concentration may be artificially low in patients receiving renal replacement therapy, the concentration was set at 4 mg/dL in patients on renal replacement therapy with a serum creatinine concentration below 4 mg/dL.
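These calculations translate directly into code. The R helpers below are a sketch under our own naming; the formulas and the creatinine-capping rule follow the text above.

```r
## Sketch of the score calculations defined above. Function and argument
## names are ours; bilirubin and creatinine are in mg/dL, age in years.
grwr <- function(graft_kg, recipient_kg) graft_kg * 100 / recipient_kg

meld <- function(inr, bilirubin, creatinine) {
  11.2 * log(inr) + 3.78 * log(bilirubin) + 9.57 * log(creatinine) + 6.43
}

## MDRD-estimated GFR; creatinine is set to 4 mg/dL for patients on renal
## replacement therapy (rrt) whose measured value is below 4 mg/dL
mdrd_gfr <- function(creatinine, age, black = FALSE, female = FALSE,
                     rrt = FALSE) {
  if (rrt && creatinine < 4) creatinine <- 4
  175 * creatinine^(-1.154) * age^(-0.203) *
    (if (black) 1.212 else 1) * (if (female) 0.742 else 1)
}

meld(inr = 3.0, bilirubin = 15, creatinine = 1.2)     # illustrative values
mdrd_gfr(creatinine = 1.2, age = 40, female = TRUE)
```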

Univariate Cox PH regression analysis was performed to evaluate the prognostic ability of each variable. To select variables for the final multivariate model, we used bootstrap resampling (1000 repetitions), which assesses the predictive robustness of candidate variables [15]. A relative selection frequency above 50% and clinical relevance were the criteria for inclusion of variables in the final prognostic model. A prognostic model to predict 1-year post-transplant patient survival was constructed based on a previously suggested method [16]. The discrimination and calibration of the prognostic model were measured by receiver operating characteristic (ROC) analysis and the Hosmer–Lemeshow test, respectively [17, 18]. In addition, a shrinkage estimate (Nagelkerke index, pseudo-R²) was used to quantify overfitting [19].
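A minimal sketch of the bootstrap selection step is given below. It assumes a hypothetical data frame dat containing the candidate predictors, and applies a univariate Cox P < 0.05 rule within each resample, which is our simplification of the cited selection procedure [15].

```r
## Sketch of the bootstrap variable-selection step described above. 'dat'
## and its columns are hypothetical; the per-resample selection rule
## (univariate Cox P < 0.05) is our simplification of the cited method.
library(survival)

candidates <- c("recipient_age", "gfr", "sodium", "vasopressor", "donor_age")
n_boot   <- 1000
selected <- matrix(FALSE, n_boot, length(candidates),
                   dimnames = list(NULL, candidates))

for (b in seq_len(n_boot)) {
  boot_dat <- dat[sample(nrow(dat), replace = TRUE), ]
  for (v in candidates) {
    fit <- coxph(as.formula(paste("Surv(time_months, death) ~", v)),
                 data = boot_dat)
    selected[b, v] <- summary(fit)$coefficients[1, "Pr(>|z|)"] < 0.05
  }
}

## Relative selection frequency; variables above 50% enter the final model
colMeans(selected)
```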

A P value of <0.05 (two-tailed) was considered statistically significant in all analyses. Statistical analyses were performed using SPSS v13.0 (SPSS, Chicago, IL, USA) and R version 2.12.0 (http://www.r-project.org). The R packages Design, survival, and boot (available at http://cran.r-project.org/web/packages/, last accessed August 20, 2011) were used in this study.

Results

Characteristics of patients at the time of LT

A total of 160 patients who received LT for ALF were identified and analyzed. Their median age was 40 years [inter-quartile range (IQR) 28–49 years], with 90 (56%) being male (Table 1). The most common causes of ALF were hepatitis B virus infection (n = 47, 29.4%) and use of herbal/folk medicine (n = 47, 29.4%). Only one patient’s ALF was associated with acetaminophen (APAP) overdose. The median waiting time from diagnosis to LT was 4 days (IQR 2–7 days). Vasopressor support and renal replacement therapy were required in 19 (11.9%) and 27 (16.9%) patients, respectively.

Table 1 Characteristics of patients with acute liver failure at the time of liver transplantation

Of the 160 patients, 36 (22.5%) underwent DDLT and 124 (77.5%) underwent LDLT. Most clinical and laboratory characteristics of the patients at the time of LT were comparable between the two groups (Table 1). However, donor age [40 (IQR 31–48) years vs. 28 (IQR 23–35) years, P < 0.01] and GRWR [1.9 (IQR 1.3–2.3) vs. 1.0 (IQR 0.9–1.2), P < 0.01] were significantly higher in the DDLT than in the LDLT group. ABO blood groups were identical or compatible in all cases. The median follow-up after LT was significantly shorter in the DDLT than in the LDLT group [25 (range 1–78) months vs. 42 (range 1–132) months, P < 0.01], reflecting the recent increase in DDLTs at our institution. The overall median post-transplant follow-up of surviving patients was 46 months (range 14–117 months).

Overall patient and graft survivals; DDLT versus LDLT

The DDLT and LDLT groups showed similar patient and graft survival rates after transplantation. The 1- and 3-year patient survival rates were 77.8 and 74.1%, respectively, in the DDLT group and 79.0 and 74.8%, respectively, in the LDLT group (P = 0.99; Fig. 1a), and the 1- and 3-year graft survival rates were 75.0 and 71.4%, respectively, after DDLT and 76.6 and 72.4%, respectively, after LDLT (P = 0.97; Fig. 1b). There were no patient deaths or graft failures more than 3 years after transplantation.

Fig. 1

Post-transplant patient and graft survival curves. There was no significant difference in patient survival (a) or graft survival (b) between recipients of adult living-donor liver transplantation (LDLT) and deceased-donor liver transplantation (DDLT)

Data on causes of death were available for 31 of the 34 patients who died within 1 year after the primary transplant. The causes of death included multiorgan failure with or without sepsis in 15 patients, graft rejection in 7, severe pancreatitis in 4, intracranial hypertension in 2, gastrointestinal bleeding in 2, and primary graft non-function in 1. Eight (5%) of the 160 patients received a second transplant within 1 year after the first transplant due to graft failure caused by primary non-function (n = 1); acute (n = 1) and chronic (n = 4) rejection; sepsis (n = 1); and recurrence of hepatitis A (n = 1). Four of these 8 patients died within 6 months after the primary transplantation.

Factors predictive of 1-year post-LT mortality

Of the 40 patient deaths overall, most (34; 85%) occurred within 1 year after LT, regardless of the type of transplantation. Thus, we combined the data on DDLT and LDLT recipients, and analyzed potential predictors, at the time of transplant, of 1-year post-transplant patient mortality in the total population (Table 2).

Table 2 Predictors of 1-year post-transplant patient mortality

On univariate analysis, none of the factors indicative of pre-transplant hepatic dysfunction, including albumin (P = 0.30), bilirubin (P = 0.07), and INR (P = 0.94), was significantly associated with mortality, and the etiology of ALF was also not a significant factor (P = 0.34). Recipient age (hazard ratio [HR] 1.03, P = 0.02), vasopressor requirement (HR 4.71, P < 0.01), and donor age (HR 1.04, P = 0.03) were significantly associated with increased post-transplant patient mortality, whereas higher GFR (HR 0.98, P < 0.01) and serum sodium (HR 0.95, P < 0.01) were significantly associated with decreased mortality. The HR for vasopressor requirement is that of a dichotomous variable, whereas the HRs for the other four variables included in our model represent per-unit increases. MELD was also a significant predictor of patient mortality (P < 0.01). However, when the 3 individual components of MELD (total bilirubin, INR, and creatinine) were analyzed, only creatinine was significant (P < 0.01), and even creatinine lost significance when creatinine (P = 0.61) and GFR (P < 0.01) were entered simultaneously into the multivariate analysis. Thus, the individual MELD components, rather than the composite MELD score, were used as variables in this study.

Five variables (recipient age, GFR, serum sodium, vasopressor requirement, and donor age) were selected as candidates for multivariate analysis, each showing a relative selection frequency >50% by the bootstrap method (Table 2). On multivariate analysis, GFR (HR 0.99, P = 0.03), vasopressor requirement (HR 4.70, P < 0.01), and donor age (HR 1.04, P = 0.02) were significantly associated with post-transplant patient mortality (Table 2). Recipient age (HR 1.03, P = 0.08) and serum sodium (HR 0.96, P = 0.06) showed marginal significance. Overfitting was unlikely to bias this model, as the estimated shrinkage factor (0.87) exceeded 0.85.

Prognostic model to predict 1-year post-LT mortality

A prognostic scoring system was constructed using the 5 selected predictors. Despite their marginal significance on multivariate analysis, recipient age and serum sodium were retained as variables because they are clinically relevant and each showed a bootstrap selection frequency higher than 0.5.

Each selected variable was categorized, and a representative value (W_ij) was defined as the midpoint of each category (Table 3). The representative value showing the best survival rate for each variable was set as the reference value (W_iREF). To weight each category of each variable, risk points were assigned as follows: recipient age [0 (<30 years), 1 (30–44 years), or 3 (≥45 years)], GFR [0 (≥90 mL/min/1.73 m²), 2 (60–89 mL/min/1.73 m²), 4 (30–59 mL/min/1.73 m²), or 5 (<30 mL/min/1.73 m²)], serum sodium [−2 (≥145 mmol/L), 0 (135–144 mmol/L), or 2 (<135 mmol/L)], vasopressor requirement [0 (no) or 6 (yes)], and donor age [0 (<30 years), 2 (30–44 years), or 4 (≥45 years)].
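For illustration, the risk-point assignment above can be expressed as a small R function; the function and argument names are ours, while the cut-offs and points are taken directly from the categories listed above.

```r
## Hedged sketch of the risk-point scoring described above; names are ours,
## the cut-offs and points follow the text (total ranges from -2 to 20).
alf_lt_score <- function(recipient_age, gfr, sodium, vasopressor, donor_age) {
  age_pts   <- if (recipient_age < 30) 0 else if (recipient_age < 45) 1 else 3
  gfr_pts   <- if (gfr >= 90) 0 else if (gfr >= 60) 2 else
               if (gfr >= 30) 4 else 5
  na_pts    <- if (sodium >= 145) -2 else if (sodium >= 135) 0 else 2
  vaso_pts  <- if (vasopressor) 6 else 0
  donor_pts <- if (donor_age < 30) 0 else if (donor_age < 45) 2 else 4
  age_pts + gfr_pts + na_pts + vaso_pts + donor_pts
}
```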

Table 3 Prognostic scoring for 1-year post-transplant mortality

The prognostic score was calculated by summing the risk points for each predictor, giving totals ranging from −2 to 20. The post-transplant 1-year patient mortality rate increased as the prognostic score increased (Fig. 2a); for example, patients with 5, 12, and 16 points had estimated 1-year mortality rates of 11, 48, and 83%, respectively. The c statistic of the model was 0.79 (Fig. 2b). Using 6 points (estimated risk 13.5%) as a cut-off score, the sensitivity and specificity of the prognostic model for 1-year patient mortality were 85 and 60%, respectively. When the prognostic scores were divided into four categories (Fig. 2c), the predicted risks agreed well with the observed risks, indicating adequate calibration (Hosmer–Lemeshow test, P = 0.39).
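A worked example using the hypothetical scoring function sketched above:

```r
## A 50-year-old recipient on vasopressor support with an estimated GFR of
## 45 mL/min/1.73 m2, serum sodium of 132 mmol/L, and a 35-year-old donor
## scores 3 + 4 + 2 + 6 + 2 = 17 points, above the 16-point level that the
## model associates with an estimated 1-year mortality of 83%.
alf_lt_score(recipient_age = 50, gfr = 45, sodium = 132,
             vasopressor = TRUE, donor_age = 35)   # returns 17
```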

Fig. 2

A prognostic scoring system for 1-year post-transplant patient mortality. The post-LT 1-year mortality rate increased as the prognostic score increased from −2 to 20 (a). The receiver operating characteristic (ROC) curve for the prognostic scoring system showed a c statistic of 0.79 (b). When the prognostic scores were divided into four categories, the predicted risks agreed well with the observed risks, indicating adequate calibration (c)

Discussion

We have shown here that long-term patient and graft survivals after LDLT were comparable to those after DDLT in patients with ALF. Most deaths occurred within 1 year after transplantation. Older recipient and donor age, vasopressor requirement, lower GFR, and hyponatremia at the time of LT were significant predictors of 1-year post-transplant mortality. By summing weighted risk points for each of these predictors, we constructed a simple and novel prognostic scoring system, with scores ranging from −2 to 20, that estimated 1-year mortality rates ranging from 0 to 100% in patients with ALF undergoing emergency LT.

Although current liver organ allocation is based on a “sickest first” policy [20], determining realistic expectations of patient and graft survival is crucial at times of organ shortage and in deciding the value of organ donation by a living-donor candidate. It is especially important in patients with ALF because their post-transplant outcomes are generally worse than those for patients with other indications [7, 8].

LDLT is in particular demand for ALF patients in regions where ALF is caused primarily by etiologies associated with a high mortality rate and the supply of organs is severely limited [6]. However, the most suitable type of liver graft in patients with ALF has been debated [21]. We and others have reported that outcomes after LDLT for ALF are acceptable, with patient survival rates of 65–80% [3, 4, 6]. However, because most of those studies have been performed in Asian countries, where the supply of organs from deceased donors is severely limited, few DDLT patients were included, making direct comparisons with LDLT difficult. Fortunately, the organ donation rate from deceased donors has sharply increased in our country in recent years [22], allowing the direct comparison between DDLT and LDLT reported here.

In addition to the types of transplants, the quality of the graft may have an important impact on post-transplant outcome [10]. Graft steatosis [12], reduced graft size [23], and ABO-incompatible grafts [12, 24] have been shown to decrease recipient and graft survivals. Higher donor body mass index (BMI), which may be a marker of graft steatosis, was also a strong predictor of early post-transplant mortality [11, 25]. However, no association was observed between these factors and post-transplant outcomes in the present study. The differences in results may be explained by the cautious donor selection criteria used for LDLT as well as DDLT in our center. These results indicate that expedited donor work-up does not increase risks associated with LT in patients with ALF if it is performed with a strictly controlled standard protocol.

Elevated pre-transplant serum creatinine concentration and vasopressor requirement have been consistently identified as the most important predictors of poor outcomes after LT for ALF [11, 12, 26]. These factors are likely to reflect multiorgan failure [27]. The presence of liver insufficiency, the inflammatory reaction triggered by hepatocellular necrosis, and the presence of superimposed infection all may contribute to the pathogenesis of multiorgan failure. Circulatory and renal insufficiency reflect the prominent arterial vasodilatation present in this condition, which may add to the alterations in organ perfusion [27].

Although creatinine is a widely used measure of renal function and is easily determined in the general population, it is a relatively inaccurate measure of renal function in patients with liver failure, for several reasons [28]. First, creatinine production in patients with liver failure is reduced secondary to the decreased synthesis of creatine, the sole precursor of creatinine, which is produced in the liver [29]. Second, patients with liver failure often receive a protein-poor diet for the treatment of hepatic encephalopathy. Both of these factors contribute to a falsely low serum creatinine level. Estimated GFR based on mathematical equations (e.g., the MDRD equation) has been shown to correlate better with measured GFR than the serum creatinine concentration does [14, 28]. In the present study, when we included both estimated GFR and creatinine concentration in our multivariate analysis, only estimated GFR remained statistically significant. Therefore, we used estimated GFR rather than creatinine concentration in our model.

Because the renal excretion of sodium is an important determinant of serum sodium, serum sodium is thought to reflect an aspect of renal function that is not captured by creatinine or estimated GFR [30, 31]. Hyponatremia per se may affect the survival of patients with ALF by exacerbating cerebral edema. Hyponatremia at the time of transplant has also been suggested to be associated with poor post-transplant outcome in patients with ESLD [32–34]. Given the high early mortality rates of emergency LT in ALF patients, we consider that transplantation before the occurrence of hyponatremia or renal dysfunction may improve post-transplant outcomes in patients who show progression of encephalopathy to grade 3 or 4. An important clinical question in this context is whether strict correction of hyponatremia before transplantation, by the use of hypertonic saline or selective V2 receptor antagonists, might improve the post-transplant survival of patients with ALF.

Recipient age showed marginal statistical significance in a multivariate analysis in the present study. However, the inverse correlation between recipient age and LT outcome has been reported repeatedly [11, 12, 25], and thus, we included this variable in the prognostic scoring system. The effect of age on mortality may be due to age-related impaired recovery following the severe physiological insult associated with ALF and transplantation [25]. Donor age has also been reported to be a risk factor for poor patient survival after LT [12, 24].

Two previous studies have attempted to comprehensively predict prognosis in patients with ALF undergoing LT. One, from the United States [11], used 4 variables at the time of listing: BMI ≥30 kg/m², serum creatinine >2.0 mg/dL, age >50 years, and history of life support (mechanical ventilation, vasopressors, intraaortic balloon pump, extracorporeal membrane oxygenation, or prostaglandin E infusion). The second, from the United Kingdom [12], identified 4 parameters at the time of listing: age >45 years, vasopressor requirement, transplantation before 2000, and the use of high-risk grafts (donor age >60 years, non-whole liver graft, ABO non-identical, or a graft with macroscopic steatosis). The variables used in these models were similar to ours: recipient age, donor age, renal function parameters, and vasopressor requirement. However, our study is unique in several respects. First, our prognostic scoring system could define criteria for determining the futility of LT in ALF. For example, the estimated 1-year post-transplant survival rate in patients with prognostic scores of ≥16 was ≤17%, suggesting that the decision to transplant should be made very carefully in these patients. In contrast, a previous model found that even patients with the highest number of points had a 1-year post-transplant survival rate as high as 51.5% [11], suggesting that this model may contribute little to the decision to transplant an ALF patient. Second, an important consideration in developing a prognostic model for ALF is the time at which the predictive variables are measured. Because of the rapidly evolving disease process in ALF, a significant proportion of patients who are initially suitable for LT rapidly develop contraindications and become unsuitable for LT [6, 23]. Outcomes of LT for ALF may therefore depend greatly on the condition of the patient at the time of transplantation, and we thus analyzed predictive factors only at the time of transplantation. Third, by weighting continuous variables, our risk assessment subdivided them into 3 or 4 categories rather than only 2, reducing the loss of information contained in continuous variables. Our model also makes complex statistical evaluations readily accessible to physicians and patients by matching risk points to expected risks without requiring a calculator or computer.

This study has some limitations. First, as a retrospective analysis of prospectively collected data, it may be subject to bias; however, all eligible patients were registered and included, with little missing data. Second, this study included few patients with ALF associated with APAP. However, the etiology of ALF was not associated with post-transplant outcomes [35]. Moreover, studies including patients with APAP-associated ALF identified significant prognostic variables similar to ours [11, 12], indicating that our prognostic model should be applicable to ALF patients regardless of etiology. Last, there may have been some degree of model overfitting given the limited number of events (34 patient deaths) for a 5-variable model, which could reduce the reproducibility of the results in an independent validation sample. However, statistical estimation indicated that overfitting was unlikely to bias this analysis [19].

In conclusion, we have shown that long-term patient and graft survivals after LDLT are comparable to those after DDLT for patients with ALF. Thus, LDLT should be considered an option for patients with ALF who are unlikely to receive DDLT in a timely fashion. A simple prognostic scoring system using 5 predictive variables at the time of transplant (recipient age, donor age, vasopressor requirement, estimated GFR, and serum sodium concentration) may accurately predict 1-year post-transplant survival in ALF patients undergoing LT, regardless of the etiology of ALF or the type of transplant. External validation in an independent data set is required to determine how our scoring system is likely to perform in other clinical settings.