Introduction

Oral anticoagulation with warfarin is a well-established therapy for ischemic stroke prevention in patients with atrial fibrillation (AF) [1]. Novel, nonvitamin K antagonist oral anticoagulants (OACs), among them direct thrombin and factor Xa inhibitors, have demonstrated similar or superior efficacy to warfarin without the need for frequent laboratory monitoring or dose adjustment [2, 3, 4•]. As the prevalence of AF rises in a growing elderly population [5, 6], these agents are becoming central to the routine practice of clinicians caring for these patients.

The rate of reported bleeding incidents in clinical practice with the first of the FDA-approved novel OACs has been unexpectedly high [7], drawing renewed attention to the serious risks of all OACs, including warfarin. Many of the clinical risk factors for stroke in patients with AF, such as increased age, are also risk factors for major bleeding [8, 9]. Consequently, though the benefits are clear, the decision to treat the elderly patient with AF with long-term OACs is often a dilemma for the clinician mindful of the risk of major bleeding.

Several bleeding risk prediction models have been created to help the clinician identify patients for whom the risk of bleeding is high and would potentially outweigh the benefits of OAC therapy. In this review, we discuss the features of 8 bleeding risk prediction models and an approach to assessing bleeding risk in clinical practice.

Bleeding Risk Prediction Models

Outpatient Bleeding Risk Index (1989, 1998)

The Outpatient Bleeding Risk Index (OBRI) was derived from a single-center cohort of 565 patients who started outpatient warfarin therapy for venous thromboembolism (VTE), stroke, atrial fibrillation (AF), or valvular heart surgery upon discharge between 1977 and 1983 [10]. Major bleeding was defined as fatal or potentially life-threatening. Life-threatening bleeding was based on factors such as the amount, rate, and location of bleeding, in addition to interventions required to stop the bleeding. In a derivation cohort of two-thirds of the patients, univariate and multivariate analyses of 150 categorical and 40 continuous variables were performed to derive 5 independent predictors of major bleeding: age ≥65 years, history of stroke, history of gastrointestinal bleeding, AF, and a serious co-morbid condition (renal insufficiency, recent myocardial infarction, or severe anemia). One or 2 points were assigned to each risk factor, and patients were divided into low, medium, and high-risk categories (Table 1). In a validation cohort of the remaining one-third of patients, the model stratified patients into risk categories with statistically significant differences in the cumulative rates of major bleeding. Low-risk individuals had an annual bleeding risk of 3 %, whereas high-risk patients had an annual bleeding risk as high as 30 %.

Table 1 Summary of bleeding risk prediction models

A modified OBRI was prospectively validated by Beyth et al. in a cohort of 264 outpatients starting warfarin at a single center between 1986 and 1987 [11]. The modified index added diabetes mellitus to the co-morbid conditions and removed AF from the predictors of bleeding. In this cohort, the modified OBRI predicted major bleeding with a c-index of 0.78. The authors also assessed the ability of the patients’ physicians to predict major bleeding and found that they performed no better than chance.

With 5 easily identifiable clinical variables, the modified OBRI performed with a c-index comparable with many widely used risk prediction models. It also evaluated a real-world population of patients taking a single agent, warfarin, managed by each patient’s primary physician. One major limitation of the model was the cohort used to derive the index, which included patients with a variety of indications for warfarin therapy. In the original OBRI study, the majority of patients (57 %) were started on warfarin after cardiac surgery. AF was the indication for therapy in only 15 % of patients. Furthermore, the bleeding rate in this cohort was very high, with a cumulative incidence of major bleeding between 11 % and 22 %, and any bleeding between 20 % and 41 %, at 12 and 48 months, respectively. In their validation cohort study, Beyth et al. found that the overall incidence of major bleeding was lower, at 8 % and 12 % at 12 and 48 months, respectively, which is more consistent with rates found in other contemporaneous studies of outpatient warfarin therapy [12, 13], but still higher than that found in a more recent study of the modified OBRI [14].

Kuijer et al. (1999)

The bleeding risk prediction model described by Kuijer et al. has the advantage of requiring only 3 easily identifiable clinical variables [15]. The bleeding risk prediction model was derived using the data set of 1021 patients in a clinical trial of low molecular weight heparin vs unfractionated heparin in the initial management of VTE, followed by a 3-month follow-up period on an OAC, either warfarin or coumarin [16]. Both treatment groups had equivalent rates of major bleeding.

The authors selected clinical variables correlated with bleeding risk from a literature review, and then determined optimal cutoff points for stratification into low, moderate, and high risk groups. The derived bleeding risk score is (1.6 × age) + (1.3 × female sex) + (2.2 × malignancy). In the test cohort, the model performed with a c-index of 0.75 for all bleeding complications and 0.82 for major bleeding complications. When applied to the validation cohort, however, there was a moderate loss of predictive power, but risk categorization of patients remained clinically useful.
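The score is simple enough to compute directly. In the sketch below, each term of the formula is treated as a 0/1 indicator (with age coded as ≥60 years), and the category cutoffs (0 = low, >3 = high) are assumptions consistent with this review's discussion of the model; neither the indicator coding nor the cutoffs are stated explicitly in the text.

```python
def kuijer_score(age_ge_60: bool, female: bool, malignancy: bool) -> float:
    """Kuijer et al. bleeding risk score.

    Each term is treated as a 0/1 indicator; the age cutoff of 60 years
    is an assumption about the published model, not stated in the text.
    """
    return 1.6 * age_ge_60 + 1.3 * female + 2.2 * malignancy


def kuijer_category(score: float) -> str:
    # Cutoffs (0 = low, >3 = high) are assumptions consistent with the
    # observation that only patients with malignancy can reach high risk.
    if score == 0:
        return "low"
    return "high" if score > 3 else "intermediate"


# An 85-year-old man and a 60-year-old woman, both without malignancy,
# land in the same intermediate category.
print(kuijer_category(kuijer_score(True, False, False)))  # intermediate
print(kuijer_category(kuijer_score(True, True, False)))   # intermediate
```

Under this coding, only a patient with malignancy plus at least one other factor can exceed the high-risk cutoff, which matches the model's behavior as described in this review.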

Though easy to apply, the Kuijer et al. model has limitations. Only patients with malignancy can achieve a score that would classify them as high risk. With only 3 risk factors utilized, bleeding risk stratification in clinical practice may be limited. For example, an 85-year-old man and a 60-year-old woman without malignancy would both be considered at intermediate risk of bleeding in the Kuijer et al. model. Additional variables that may modify risk, such as history of bleeding, are not included. In addition, because it was assessed in a cohort of patients treated for VTE with a limited follow-up observation period, this model may not be applicable to patients with AF being assessed for long-term anticoagulation.

Kearon et al. (2003)

In the Extended Low-Intensity Anticoagulation for Thrombo-Embolism (ELATE) trial, Kearon et al. prospectively compared low-intensity warfarin therapy (target INR 1.5 to 1.9) with conventional-intensity warfarin therapy (INR 2.0 to 3.0) in 738 patients with unprovoked VTE. The authors found that low-intensity warfarin was associated with more recurrent VTE without any reduction in the risk of clinically important bleeding [17]. They also assessed predefined clinical variables for their association with bleeding: age ≥65 years, prior stroke, peptic ulcer disease, prior GI bleeding, creatinine >1.5 mg/dL, anemia or thrombocytopenia, liver disease, diabetes mellitus, and antiplatelet therapy. The authors found that the rate of bleeding increased with accumulating risk factors.

In 2006, Gage et al. developed a bleeding risk model based on the predefined variables reported by Kearon et al. and validated it in a retrospective analysis of the National Registry of Atrial Fibrillation [18]. Gage et al. found that the Kearon et al. model predicted major bleeding in patients taking warfarin with a c-index of 0.66, slightly less accurate than the novel bleeding risk model HEMORR2HAGES (see below).

Interestingly, the variables that Kearon et al. described were not derived from a prospective cohort, but predefined in a process that the authors do not make explicit. A risk model derived directly from this prospective trial cohort could conceivably have greater predictive accuracy, particularly since the duration of observation was long (average 2.4 years) and data collection was complete. Though the cohort was much smaller, the quality of data from a prospective trial such as ELATE is likely superior to that derived from retrospective registries and databases.

Shireman et al. (2006)

In 2006, Shireman et al. published a bleeding risk model for elderly patients using data retrospectively gathered from the National Registry of Atrial Fibrillation [19]. They included patients discharged from the hospital between April 1998 and March 1999, and between July 2000 and June 2001 with a diagnosis of AF, who were at least 65 years of age and were receiving warfarin at the time of discharge. The defined follow-up period was 90 days, and the primary outcome was hospital admission for either GI hemorrhage or intracranial hemorrhage.

In a cohort of 26,345 patients meeting inclusion criteria, a derivation cohort was randomly selected for bleeding risk model development. Eighteen variables were considered for inclusion, with 8 included in the model based on a selection process that the authors do not make explicit (Table 1). The risk factors were age >70 years; gender; remote bleeding; recent (during index hospitalization) bleeding; alcohol/drug abuse; diabetes; anemia; and antiplatelet use. Each risk factor was assigned a relative weight in an equation from which a risk score could be calculated, with stratification into low, moderate, and high risk categories.

The model was then applied to an alternate sample of patients in the validation cohort of 6470 patients, of whom 1.5 % experienced the primary outcome of admission for major bleeding. In this cohort, the model performed with a c-index of 0.632, with statistically significant differences in bleeding rates between low, moderate, and high risk patients. The authors also applied the Kuijer et al. model to their validation cohort, producing a c-index of 0.503, representing “no discrimination.” The OBRI model was also applied and had a c-index of 0.613.

The Shireman et al. model has the advantage of being derived from a large cohort of patients aged ≥65 with AF receiving warfarin therapy. However, the primary limitation is its complexity, requiring 8 clinical risk factors used in a relatively complex formula. Additionally, with a modest c-index, the Shireman et al. model did not perform as well as other risk prediction models in their respective validation cohorts.

HEMORR2HAGES (2006)

In 2006, Gage et al. published a new bleeding classification scheme for elderly patients with atrial fibrillation. Combining bleeding risk factors from 3 preexisting bleeding risk models, they devised HEMORR2HAGES, assigning 2 points for a prior bleed and 1 point for each of the following risk factors: Hepatic or renal disease, Ethanol abuse, Malignancy, Older (age ≥ 75 years), Reduced platelet count or function (including aspirin use), Rebleeding risk (ie, history of bleeding), Hypertension (uncontrolled), Anemia, Genetic factors (CYP2C9 single nucleotide polymorphism), Excessive fall risk (including neuropsychiatric impairment), and Stroke (Table 2) [18]. The authors then used data from the National Registry of Atrial Fibrillation [20] to apply the HEMORR2HAGES scheme to 3791 Medicare beneficiaries with AF, assessing for hemorrhage incidence using Medicare claims.

Table 2 HEMORR2HAGES bleeding risk
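The point assignment can be sketched as a simple tally: 2 points for a prior bleed and 1 point for each other factor present. This is a minimal sketch of the scoring arithmetic only; the keyword-argument factor names are illustrative, and no risk-category cutoffs are imposed because the text does not define them.

```python
def hemorr2hages(prior_bleed: bool, **other_factors: bool) -> int:
    """HEMORR2HAGES tally: 2 points for a prior bleed, 1 point per other
    factor present (hepatic/renal disease, ethanol abuse, malignancy,
    age >= 75, reduced platelet count/function, uncontrolled hypertension,
    anemia, genetic factors, excessive fall risk, stroke)."""
    return 2 * prior_bleed + sum(other_factors.values())


# Illustrative use: prior bleed plus anemia and uncontrolled hypertension.
score = hemorr2hages(True, anemia=True, hypertension_uncontrolled=True)  # 4
```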

The authors found that HEMORR2HAGES accurately predicted bleeding risk in this cohort, with c-indices of 0.67, 0.72, and 0.66 in patients taking warfarin, aspirin, or neither, respectively. They also found that when compared with the preexisting bleeding risk schemes (OBRI, Kuijer et al., and Kearon et al.), HEMORR2HAGES was the most accurate predictor of bleeding in the cohorts of patients taking aspirin or warfarin.

The major limitation of this model is its complexity, with at least 13 factors to consider, some of which may be subjective or difficult to ascertain (excessive fall risk, ethanol abuse, or genetic factors). In validating their risk model, the authors were unable to capture data on the presence of genetic factors, yet this variable remains in their risk model. In addition, in including neuropsychiatric impairment under excessive fall risk, the authors reference a study by Gage et al. that evaluated neuropsychiatric impairment, defined as schizophrenia, dementia, or Parkinson disease [21]. Though it exhibited a trend toward increased risk, neuropsychiatric impairment as an independent risk factor was not significantly associated with intracranial hemorrhage. Fang et al. demonstrated a similar finding in the ATRIA cohort (see below), with no significant difference in major bleeding between those diagnosed with neuropsychiatric disease and those without [22].

RIETE (2008)

The Computerized Registry of Patients with Venous Thromboembolism (RIETE) is a multicenter, international registry of prospectively enrolled, consecutive patients with symptomatic VTE. In a retrospective analysis, Ruiz-Gimenez et al. presented a risk prediction model for major hemorrhage based on clinical variables that could be assessed in patients before the institution of anticoagulant therapy [23]. The study design involved a subset of patients assigned to the derivation cohort, from which the following clinical variables associated with major bleeding were identified: recent major bleeding (2 points), creatinine level >1.2 mg/dL (1.5 points), anemia (1.5 points), cancer (1 point), clinically overt PE (1 point), and age >75 years (1 point). Bleeding was classified as major when it required transfusion of ≥2 units of blood or was retroperitoneal, spinal, intracranial, or fatal. Patients from the validation cohort were stratified into 3 risk groups (low, intermediate, high), with likelihood ratios for major bleeding of 0.03, 1.16, and 2.65, respectively.
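The weighted sum described above translates directly into code. The point values come from the text; the low/intermediate/high cutoffs (score 0 = low, >4 = high) follow the published RIETE model but are an assumption here, since the review does not state them.

```python
def riete_score(recent_major_bleed: bool, creatinine_gt_1_2: bool,
                anemia: bool, cancer: bool, overt_pe: bool,
                age_gt_75: bool) -> float:
    """RIETE bleeding risk score: weighted sum of six clinical variables."""
    return (2.0 * recent_major_bleed
            + 1.5 * creatinine_gt_1_2
            + 1.5 * anemia
            + 1.0 * cancer
            + 1.0 * overt_pe
            + 1.0 * age_gt_75)


def riete_category(score: float) -> str:
    # Cutoffs (0 = low, >4 = high) are taken from the published model;
    # they are an assumption, not stated in the text above.
    if score == 0:
        return "low"
    return "high" if score > 4 else "intermediate"
```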

The chief advantages of the RIETE risk model are its ease of application and derivation from a high-quality, multicenter, prospectively enrolled cohort. However, given that it enrolled only patients treated for VTE, it may not be broadly applicable to patients with AF. Additionally, with only 3 months of follow-up data, the long-term risk of bleeding was not assessed.

HAS-BLED (2010)

Pisters et al. developed the HAS-BLED risk model with the intention of providing a practical risk score to estimate the 1-year risk of major bleeding in a cohort of patients with AF [24••]. Patient data were analyzed retrospectively from the Euro Heart Survey on AF, a large, prospective database of ambulatory and hospitalized patients with AF in 35 member countries, with a 1-year follow-up assessment of medical records to determine survival and major cardiovascular events [25, 26]. Based on 3978 patients who completed follow-up, risk factors were identified in a multivariate analysis to derive the bleeding risk score HAS-BLED (Hypertension, Abnormal renal/liver function, Stroke, Bleeding history or predisposition, Labile international normalized ratio, Elderly (>65 years), Drugs (antiplatelet agents, NSAIDs)/alcohol concomitantly) (Table 3). Major bleeding was defined as any bleeding requiring hospitalization and/or causing a decrease in hemoglobin level >2 g/dL and/or requiring blood transfusion, excluding hemorrhagic stroke.

Table 3 HAS-BLED Bleeding risk score and bleeds per 100 patient years
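The acronym maps to a simple additive score. In the published model, each factor contributes 1 point, with abnormal renal and liver function, and concomitant drugs and alcohol, scored separately, for a maximum of 9 points; the commonly used high-risk threshold of ≥3 is an assumption here, as it is not stated in the text above.

```python
def has_bled(hypertension: bool, abnormal_renal: bool, abnormal_liver: bool,
             stroke: bool, bleeding_history: bool, labile_inr: bool,
             elderly_gt_65: bool, drugs: bool, alcohol: bool) -> int:
    """HAS-BLED score: 1 point per factor present (maximum 9).

    Renal and liver dysfunction count separately, as do concomitant
    drugs (antiplatelet agents/NSAIDs) and alcohol use.
    """
    return sum((hypertension, abnormal_renal, abnormal_liver, stroke,
                bleeding_history, labile_inr, elderly_gt_65, drugs, alcohol))


def high_bleeding_risk(score: int) -> bool:
    # A score >= 3 is a commonly used high-risk threshold (an assumption
    # here, not stated in this review).
    return score >= 3
```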

HAS-BLED demonstrated good predictive accuracy for major bleeding in the overall Euro Heart Survey cohort (c-index 0.72), with increasing accuracy in the subgroups of patients on OAC and antiplatelet therapy concomitantly (0.78), not on any antithrombotic therapy (0.85), and on antiplatelet therapy alone (0.91). Less robust was its predictive accuracy for major bleeding in patients on OACs alone (0.69).

HAS-BLED has been validated in a retrospective analysis of 7329 patients participating in the SPORTIF (Stroke Prevention Using an ORal Thrombin Inhibitor in Atrial Fibrillation) trial [27••]. Its principal advantage is its ease of applicability in clinical practice. It has been recommended by the European Society of Cardiology and the Canadian Cardiovascular Society to assess bleeding risk in patients with AF [8, 28].

The studies used to derive HAS-BLED, however, have limitations. Pisters et al. note that 25 % of data regarding the occurrence of major bleeding during the follow-up period were missing. This may have contributed to the very low rate of major bleeding in this cohort, found in only 1.5 % of patients. Deriving risk factors from such a small number of bleeding events introduces the possibility of selection bias.

ATRIA (2011)

In the Anticoagulation and Risk Factors in Atrial Fibrillation (ATRIA) study, Fang et al. retrospectively reviewed the health records of 9186 patients with atrial fibrillation on warfarin in the Kaiser Permanente health care system of Northern California for the incidence of and risk factors for major bleeding [22]. Patients with atrial fibrillation, risk factors, and major hemorrhages were identified through ICD-9 codes. Major hemorrhages were defined as fatal, requiring ≥2 U of packed red blood cells, or hemorrhage into a critical anatomic site, such as intracranial or retroperitoneal.

The authors assigned a subset of patients to a derivation cohort, from which they identified clinical variables associated with major hemorrhage. Five variables emerged: anemia (3 points), severe renal disease (3 points), age ≥75 years (2 points), any prior hemorrhage diagnosis (1 point), and diagnosed hypertension (1 point) (Table 4). The ATRIA model was applied to an alternate subset of patients in the validation cohort, performing with a c-index of 0.74 when collapsed into a 3-stratum (low, intermediate, high-risk) scheme, a higher c-index than the other risk models applied to the same cohort.

Table 4 ATRIA bleeding risk model and bleeds per 100 patient years
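The five weighted variables translate directly into code. The point values are from the text; the 3-stratum collapse (0–3 low, 4 intermediate, 5–10 high) follows the published ATRIA model but is an assumption here, as the review does not give the cutoffs.

```python
def atria_score(anemia: bool, severe_renal_disease: bool, age_ge_75: bool,
                prior_hemorrhage: bool, hypertension: bool) -> int:
    """ATRIA bleeding risk score (0-10) from the five weighted variables."""
    return (3 * anemia + 3 * severe_renal_disease + 2 * age_ge_75
            + 1 * prior_hemorrhage + 1 * hypertension)


def atria_category(score: int) -> str:
    # 3-stratum collapse (0-3 low, 4 intermediate, 5-10 high); cutoffs
    # follow the published model and are not stated in the text above.
    if score <= 3:
        return "low"
    return "intermediate" if score == 4 else "high"
```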

Similar to the HAS-BLED cohort, the ATRIA cohort had a low rate of major bleeding at only 1.4 % annually. Furthermore, the authors note that several variables, such as blood pressure measurements, were not available in the database of ICD-9 diagnoses from which they derived the risk model. Thus, variables of greater discriminatory capacity and many modifiable variables (eg, uncontrolled hypertension) may not have been assessed.

Head-to-Head Comparisons

Apostolakis et al. compared the performance of 3 different bleeding risk models developed exclusively for patients with AF [29••]. The authors applied HAS-BLED, ATRIA, and HEMORR2HAGES to the vitamin K antagonist arm of the Evaluating the Use of SR34006 Compared to Warfarin or Acenocoumarol in Patients With Atrial Fibrillation (AMADEUS) study. AMADEUS was a randomized, noninferiority study comparing fixed-dose idraparinux with adjustable-dose vitamin K antagonists in patients with nonvalvular AF [30]. The principal safety outcome was clinically relevant bleeding, subdivided into major bleeding (fatal, intracranial, requiring transfusion of ≥2 units of erythrocytes) and nonmajor bleeding. A total of 4576 patients were randomized, with 2293 patients in the vitamin K antagonist arm. The trial was stopped early with a mean follow-up time of 10.7 months due to an excess of clinically relevant bleeding in the idraparinux arm.

For their post-hoc analysis, Apostolakis et al. applied 3 different bleeding risk models developed from cohorts of patients with AF to the vitamin K antagonist arm of AMADEUS to assess their relative accuracy in predicting bleeding events. Applying HAS-BLED, ATRIA, and HEMORR2HAGES, the authors found that HAS-BLED performed only modestly, with c-indices of 0.60 and 0.65 for clinically relevant bleeding and major bleeding, respectively. ATRIA and HEMORR2HAGES fared worse. In this cohort, an ATRIA score >3 (intermediate risk) was not significantly associated with a risk of any clinical bleeding.

Following recommendations for bleeding risk assessment with the HAS-BLED model by the European Society of Cardiology and the Canadian Cardiovascular Society, Lip et al. sought to compare the predictive accuracy of HAS-BLED with older models, in addition to the relatively recently described ATRIA model [31]. The authors retrospectively identified 7156 patients admitted to the hospital with nonvalvular AF between 2000 and 2010 at a single university hospital system in France. Anticoagulant treatment at discharge was determined by review of hospital records, and clinical variables, including bleeding outcomes, were obtained from a computerized coding system. In addition to HAS-BLED and ATRIA, the authors evaluated HEMORR2HAGES, the revised OBRI of Beyth et al., Kuijer et al., and Shireman et al. in the study cohort. All risk models performed modestly, with HAS-BLED having a c-index of 0.60—slightly higher than the rest, but not by a statistically significant margin. Using the net reclassification index (NRI), a popular though often criticized alternative to the c-statistic [32, 33], the authors showed that HAS-BLED was statistically superior to the other risk models.

Implications for Clinical Practice

As alternatives to warfarin, novel OACs are attractive options for both clinicians and patients, with the potential for increased medication adherence and decreased health care utilization costs. Before initiating treatment, an informed discussion should weigh the risk of bleeding against the risk of thromboembolism and the expected risk reduction with OAC therapy. With recent clinical practice guidelines recommending bleeding risk assessments to help guide therapy, bleeding risk prediction models are an important tool for clinicians.

There are several approaches that the clinician may employ in assessing the bleeding risk of an individual patient being considered for long-term OAC therapy. One approach would be to apply the HAS-BLED model to all patients, especially those with AF. In head-to-head studies, it has performed better than other bleeding risk prediction models, though with only modest predictive accuracy [29••]. Nonetheless, HAS-BLED uses a limited number of clinical variables that are relatively easy to derive, and as such has been recommended in clinical guidelines for the management of AF. For patients deemed high risk by this model, instituting long-term OAC therapy should be carefully considered and, in many patients, may not be recommended.

On the other hand, a patient-centered approach would first consider specific co-morbidities of the patient, followed by selection of a risk prediction model that takes these into account. In assessing a patient with a malignancy (eg, prostate cancer), selecting a risk prediction model that includes malignancy as a variable may be advantageous. If a patient has several co-morbidities, a risk model such as HEMORR2HAGES, which incorporates several variables, might better stratify bleeding risk. In patients with AF, bleeding risk models validated in cohorts of patients with non-AF indications for anticoagulation may not be applicable. Patients may be on aspirin at baseline, and a risk model that incorporates this variable may be a better option.

Some of the risk factors in these models are potentially modifiable (uncontrolled hypertension, alcohol abuse, concomitant antiplatelet agents), and clinicians should make aggressive efforts toward modifying the bleeding risk for a patient if it would lower the risk of OAC treatment. The use of antiplatelet agents such as aspirin concomitantly with OACs has been shown to significantly increase the risk of major bleeding—particularly intracranial hemorrhage—in patients with AF [34]. When considering initiating OACs for patients already taking daily aspirin, the clinician should consider the indication for antiplatelet therapy and balance the benefit of continuing this agent against the risk of increased bleeding.

Finally, studies have demonstrated that clinicians relying only on their clinical judgment fail to appropriately stratify bleeding risk, with a tendency to overestimate bleeding risk [11, 35, 36]. Given the overall low rate of major bleeding with OACs and the well-established benefits of OAC therapy in AF patients, the widespread application of bleeding risk prediction models is likely to result in more patients receiving long-term OAC therapy. Bleeding risk prediction models can be used to provide evidence of a patient’s low risk status, and provide the clinician with further justification to safely prescribe these agents. Intermediate, and even high, risk stratification does not necessarily preclude OAC if the benefits are great. However, in these patients, risk stratification should guide the clinician to provide for frequent follow-up and monitoring, and easy access to medical care. In patients for whom these provisions are not possible, the clinician may conclude that a model underestimates the patient’s true bleeding risk.

Conclusions

Many clinicians have been reluctant to consider anticoagulation in patients with atrial fibrillation for fear of causing life-threatening bleeding. Bleeding risk, however, is often overestimated by clinicians, and the clear advantage of anticoagulation is underappreciated. Use of algorithms to estimate thromboembolic risk and bleeding risk can help the clinician put these risks into the proper perspective so that an informed treatment decision can be made with the patient. The bleeding algorithms may also suggest interventions that may reduce bleeding risk, such as discontinuing low-dose aspirin in patients without a strong indication for it. These bleeding algorithms are an important component of the risk assessment of the new anticoagulants, since antidotes to stop bleeding are not available for these medications.