Introduction

With half a million people living with end-stage renal disease (ESRD) in the United States today, it is hard to fathom that 75 years ago kidney failure was almost always quickly fatal. For most of the twentieth century, kidney failure was synonymous with mortality unless the patient’s kidneys somehow recovered. Multiple initial forays into the development of renal replacement therapies were unsuccessful, and many of these well-intentioned failures have probably been lost to medical historians. Peritoneal dialysis, hemodialysis, and kidney transplantation share a common theme in that their early development began with tentative applications and required multiple refinements over a course of at least two decades before widespread use of these therapies could begin. In addition, access to hemodialysis presaged the problems with accessing kidney transplantation and foreshadowed the costs and resource constraints that continue to vex ESRD care today.

Because of its less technical machinery requirements and the lack of need for anticoagulants, peritoneal dialysis was the first renal replacement therapy used successfully in humans. Dr. Georg Ganter of the University of Würzburg first reported on two cases of peritoneal dialysis in 1923, one being a woman with ureteral obstruction from uterine cancer. Unfortunately, with the rise of the Nazis, Dr. Ganter was forced into retirement soon after he nobly advocated for the rights of Jewish patients. By the time of his death in 1940, only 13 patients worldwide had been treated with peritoneal dialysis (Teschner et al. 2004).

Notable forays into developing hemodialysis technology also had roots in Germany, where in 1924, at the University of Giessen, Dr. Georg Haas became the first physician to try this therapy in human patients (Paskalev 2001). However, it would be almost another 20 years before it was used again, by Willem Kolff in the World War II-ravaged Netherlands, to treat patients with acute kidney injury (Kolff et al. 1944, reprinted 1997). Unfortunately, but understandably, with only a few hemodialyzer prototypes available to meet the needs of the many patients with kidney failure, careful patient selection was necessary, leaving many to succumb until more dialyzers could be made (Blagg 2007).

It is commonly accepted that the first long-term successful kidney transplant was performed by Dr. Joseph Murray at the Peter Bent Brigham Hospital (now Brigham and Women’s Hospital) on December 23, 1954. The transplant occurred between the Herrick brothers and could only proceed because the recipient and donor were proven to be identical twins. The transplanted graft lasted 8 years, and subsequent identical twin transplants were successful. However, dealing with rejection episodes in donor and recipient pairs who were not genetically identical presented an enormous hurdle. Developing the means to avoid rejection, primarily with immunosuppressive medication, became the principal driver in widening the application of kidney transplantation. Thus, it took more than 20 years before kidney transplantation could assume its preeminence in ESRD care for all patients and not just those with an immunologically well-matched living donor.

Despite the inferior outcomes of unmatched kidney transplants in 1967, renal disease experts recognized the value of kidney transplantation and positioned it at least in the same tier as dialysis therapy in clinical importance. The report of the Special Committee on Chronic Kidney Disease chaired by Carl W. Gottschalk definitively established for practicing US doctors that transplantation and dialysis therapies for renal failure were no longer experimental, even though high rates of difficult-to-treat rejection were still common with transplantation. This report, although read only by select audiences, had an undeniable influence in revising Medicare legislation to include the End-Stage Renal Disease Entitlement in 1972, ensuring coverage for both dialysis and kidney transplant patients (Rettig 1991). In 1968, the Uniform Anatomical Gift Act was enacted, which helped standardize an organ donation process that up to that time varied state to state (NCCUSL 1968). At approximately the same time, the first Organ Procurement Organization (OPO) in the United States, the New England Organ Bank, came into being. Other OPOs were established not long after as transplant professionals tried to increase the availability of deceased donor organs. As the vast majority of hospitals in the United States did not, and still do not, have transplant surgery capabilities, OPO assistance in organizing resources for organ procurement was, and continues to be, essential. The additional duties of OPOs with regard to deceased organ donation are to support surviving family and friends through the donation process and to obtain and communicate critical medical information that may affect organ quality and allocation.

During this period, kidney transplantation from deceased donors was a rare occurrence, especially by today’s standards. It had been accomplished in 1962 by Dr. Joseph Murray, the same surgeon who worked with the Herrick twins, together with Dr. David Hume, but its superiority to dialysis therapies was far from proven. Unlike living kidney transplantation, where immunologic matching was often easier because of genetically related family members, a deceased donor kidney faced a much more complicated route to finding an immunologically well-matched home. In addition, if the matching was not strong, outcomes suffered significantly, and the recipient could be worse off than if they had remained on dialysis.

Many attempts were made to overcome the immune-mediated rejection of transplanted allografts, including severe bone marrow suppression with total body irradiation, 6-mercaptopurine, cyclophosphamide, and azathioprine (Starzl 2000). However, the specific advancement that finally tipped the scales in favor of kidney transplantation over dialysis was the introduction of cyclosporine in the late 1970s. This extract from the fungi Cylindrocarpon lucidum and Trichoderma polysporum was found to preferentially target T lymphocytes without the accompanying bone marrow suppression or organ toxicity seen with azathioprine and cyclophosphamide (Dreyfuss et al. 1976; Borel et al. 1977). Thus, cyclosporine dramatically reduced rejection rates even for highly unmatched grafts, and it quickly became apparent that transplant recipients fared much better than their dialysis-dependent counterparts (Port et al. 1993).

The improved outcomes of kidney transplant recipients, as well as unscrupulous behavior by some who hoped to profit from an organ trade, prompted increased federal government inquiry and oversight (Sullivan 1983). In October 1984, through bipartisan efforts and sponsorship by Representative Al Gore and Senator Orrin Hatch, the National Organ Transplant Act (NOTA) was signed into law by President Reagan. The most straightforward accomplishment of this new legislation was the outlawing of buying and selling organs. More importantly, it seized the opportunity to build the infrastructure needed to allow deceased donor transplantation to grow. Thus, the Organ Procurement and Transplantation Network (OPTN), which acts as the main umbrella organization for transplantation in the United States, was created (Neylan et al. 1999). Today OPTN membership includes all transplant centers, OPOs, and transplant histocompatibility laboratories.

NOTA also allowed the adaptation of an existing program into more prominent infrastructure: the conversion of the Southeastern Organ Procurement Foundation (SEOPF) into the United Network for Organ Sharing (UNOS). SEOPF was originally formed by a group of transplant professionals in 1968 with the goal of determining where deceased donor kidneys could best be utilized (Stegall 2017). However, as the number of patients awaiting a deceased donor kidney increased and the knowledge of immunologic matching improved, the complexity of this problem became daunting. In 1977, SEOPF became the first organization to use a computerized database, named “United Network for Organ Sharing,” to help with deceased donor kidney allocation. In 1982, SEOPF established a call center in Richmond, Virginia, to assist with organ placement, in the same location as today’s UNOS headquarters. UNOS was formed in 1984 as a nonprofit organization and was awarded the contract to operate the OPTN in 1986; it has since been the sole entity to manage that contract (UNOS 2017).

With UNOS managing the OPTN, a transparent methodology was established for how all the processes behind deceased donor procurements and transplantations would be conducted. This included rules on how OPOs would operate and how waitlists for various organs would be constructed. Committees were established for each organ system and for other specific concerns to help manage the OPTN. A principal effort was directed toward developing a system that would maximize safe deceased donor organ usage for transplantation. For biologic reasons, waitlist construction for each organ system was and still is organized by blood type. For deceased donor kidneys, other factors are also taken into consideration. Most importantly, the human leukocyte antigen (HLA) makeup of both donor and recipient has significant relevance in waitlist construction; thus individual candidate rankings were and are frequently quite different even for donors of the same blood type. In 2007, DonorNet® was disseminated in the United States, and organ offers started to be made in a computerized fashion over the Internet. This allowed easier viewing of the specific match run for each organ offer and provided greater dissemination of information on both donors and potential recipients. In 2013, in an attempt to eliminate disparities in access for ethnic minorities and highly sensitized candidates, as well as provide comprehensive data about kidneys to guide transplant decision-making, the UNOS board approved the Kidney Allocation System (KAS). This new strategy went into effect on December 5, 2014, and contained nine major revisions to the kidney allocation policy, with the goal of maximizing the utility of every donated kidney without diminishing access, particularly for high-risk groups.

Summary of the Kidney Allocation System Changes:

  1. Waiting time will capture prior time spent on dialysis (section “Living Kidney Transplantation and Living Kidney Exchange Programs”).

  2. Simultaneous local and regional offers of kidneys with higher parenchymal risk (i.e., KDPI score greater than 85%) (section “Geographic Considerations”).

  3. Elimination of OPO-specific variances (section “Geographic Considerations”).

  4. Elimination of the Payback Policy (section “Geographic Considerations”).

  5. Kidney Donor Profile Index (KDPI) score used for allocation over the old definitions of SCD, ECD, and DCD (section “The Development of Calculators and the Reliance on the Kidney Donor Profile Index (KDPI) Score for Allocation”).

  6. Longevity matching for the top 20% adult posttransplant survival candidates (EPTS score ≤20%) for kidneys with a 20% or better KDPI (section “Utility Concerns and the Estimated Posttransplant Survival Score”).

  7. Sensitization addressed in a stratified fashion with special measures for the highly sensitized (section “The Development of the Calculated Panel-Reactive Antibody (CPRA) and the Very Highly Sensitized”).

  8. Improved access for blood type B candidates using A2 and A2B donors (section “Improved Access for Blood Type B Candidates”).

  9. Defining living donors by procurement not transplant (section “Living Donor Defined by Procurement”).

Living Kidney Transplantation and Living Kidney Exchange Programs

Living kidney transplantation is usually considered the best option for any patient needing a transplant. However, the reasons today are somewhat different than they were during the early history of kidney transplantation. In the early years, immunologic matching of genetically close family members was given a strong preference over other therapeutic choices, with the ideal being an identical twin or closely matched sibling. Haplo-identical matches from parents donating to children, children donating to parents, or siblings donating to siblings were also given preference.

With improvements in immunosuppression, familial matching lessened in importance, and spousal and friend donation became more common and is now highly encouraged. In addition, as the number of patients awaiting a kidney transplant increases, access to a living donor transplant is of paramount importance, regardless of the degree of immunologic matching. Also, as the collective knowledge of kidney disease pathology and genetics has progressed, it has become possible to better standardize the evaluation of living donors. The persistent pressure to increase the number of living donors has led many centers to consider donors with medical conditions that would previously have disqualified them (e.g., obesity, hypertension, and age >60) (Rao and Ojo 2009).

Another method of addressing the shortage of compatible kidneys is the creation of infrastructure for living donor exchange programs. This type of program was first proposed in 1986, by which time kidney transplantation had proven its superiority over dialysis for the definitive treatment of end-stage renal disease (ESRD) (Rapaport 1986). However, functional living kidney exchange programs in the United States did not become fully operational until the 2000s, with their strongest US proponents being transplant surgeons at Johns Hopkins Hospital in Baltimore (Akkina et al. 2011). In contrast, South Korea, which faced greater struggles in developing a deceased donor infrastructure than the United States, began the earnest operation of a living donor exchange program as early as 1991 (Park et al. 1999). One reason for the relatively slow adoption of living donor exchanges in the United States was that the original NOTA legislation of 1984 prohibited profiting from organ procurement. Thus federal regulations needed improved language to ensure that living donor exchange programs could function legally. This did not occur until 2007, with the Charlie W. Norwood Living Donation Act, which established that paired donation is not considered valuable consideration (an inducement to enter into a contract that is enforceable in the courts) (Akkina et al. 2011). With this improved legislation and the widespread acceptance by the US transplant community of its potential benefit, robust exchange programs are now operational, with at least one program managed by UNOS. It is fairly clear now that US exchange programs offer a functional and usually successful solution for patients needing a transplant who have one or several medically and socially suitable but incompatible donors. The one area where current exchange programs are less functional is for extremely sensitized recipients, for whom finding a compatible pairing can require a pool of potential donors beyond the scope of today’s US exchange program enrollment. Because of the transportation issues involved in exchange programs, including the need to package organs and ship them on flights, local OPOs have played a critical role in providing logistical support, and many of the processes necessary for deceased donor kidney transplantation have had to be adapted to assist with living kidney exchanges. Yet despite the strong emphasis on living kidney donation, improved abilities to deal with immunologic incompatibilities, and a modest broadening of living donor criteria, living donors made up less than 30% of the kidneys transplanted in the United States in 2017 (OPTN 2017).

Deceased Donor Kidney Scarcity and Waiting Time

The invaluable resource that deceased donors provide was recognized at the outset of their use, but the ever-increasing disparity between limited supply and ballooning demand has necessitated multiple informational campaigns targeting the lay public (Chatterjee et al. 2015). Public policy has adapted to the preciousness of deceased donor organs: as of 2017, one can designate oneself as an organ donor when applying for a driver’s license in all states and the District of Columbia (Department of Health and Human Services 2018). Trends do show that the number of deceased donor kidneys available for transplant has increased considerably during UNOS’s history. In 1988, there were slightly more than 4,000 deceased donors nationally, whereas in 2016, a record year in deceased donation, there were just under 10,000 (OPTN 2016). Reasons for this increase are multifactorial and include public education on the benefits of deceased donor transplants, updated legislation barring revocation of the consent to donate that a donor gave while alive, and standardization of practices for approaching deceased donor families. This increase, however, pales in comparison with the growth in the number of patients awaiting a kidney transplant, not to mention all patients requiring dialysis. In 1988, 10,000 candidates were awaiting a kidney transplant. By October 2017, the number had grown to over 96,500 (OPTN 2017). In addition, the dialysis population in the United States had approached 500,000 patients by 2017 (USRDS 2017). Together these figures describe the dominant trend in kidney transplantation: since the establishment of the OPTN, there has been an almost 2.5-fold increase in deceased donation but a greater than ninefold increase in the number of patients awaiting a deceased donor kidney. The principal area of growth is the increased number of patients being listed at greater than 50 years of age (OPTN/UNOS 2008). Thus, the number of patients needing a deceased donor kidney transplant has always outnumbered the kidneys available, and the gap between resource and demand is widening. Despite the concentrated effort toward raising awareness of living kidney donation, deceased donor kidney transplants have outnumbered living donor transplants in the United States by more than two to one over the last 30 years of OPTN data (OPTN 2017). Considering these realities, patients needing a kidney transplant who do not have a living donor option face an obligate wait of potentially many years for a deceased donor kidney.

Following the ethical principle of equity (fairness), how long a particular candidate has been waiting has consistently been, and often is, the deciding factor in allocation, with each year of waiting worth one point and fractions of a year accruing continuously toward a candidate’s score. How this time is accrued changed dramatically with the latest revisions to the national kidney allocation policy in 2014. Historically, time accrual began only once two conditions were satisfied: (1) the candidate’s glomerular filtration rate was documented at or below 20 ml/min, and (2) the candidate had been listed by a transplant program. This meant that some patients might be on dialysis for a long period before being able to accrue allocation points. For patients diagnosed with ESRD on presentation, getting onto a kidney transplant waitlist immediately could be an impossible endeavor; many transplant programs refused to complete inpatient evaluations, requiring new ESRD patients to return for outpatient appointments after they had left the hospital. A monumental change made with KAS is that for all candidates referred to transplant centers after initiating chronic dialysis, waiting time now includes all time since starting chronic maintenance dialysis, as determined by the information on the Centers for Medicare and Medicaid Services form 2728 (OPTN 2014). This change was made to improve fairness in accessing deceased donor kidneys, recognizing that many candidates were unfairly penalized by late referral to a transplant center.
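To make the mechanics concrete, a minimal sketch follows, assuming a simple date model; the function and field names are the author’s own, not OPTN’s.

```python
from datetime import date
from typing import Optional

def waiting_time_start(listing_date: date,
                       gfr_le_20_date: Optional[date],
                       dialysis_start_date: Optional[date]) -> date:
    """Date from which kidney waiting time accrues.

    Post-KAS, candidates already on chronic maintenance dialysis are
    credited back to their dialysis start date (CMS form 2728), even
    if they were listed years later. Candidates not yet on dialysis
    still need both a listing and a documented GFR <= 20 ml/min, so
    their clock starts at the later of those two events.
    """
    if dialysis_start_date is not None:
        # KAS change: credit all time since chronic dialysis began.
        return min(listing_date, dialysis_start_date)
    return max(listing_date, gfr_le_20_date)

# A candidate listed two years after starting dialysis is credited
# for the full wait: roughly one allocation point per year.
start = waiting_time_start(date(2016, 3, 1), None, date(2014, 3, 1))
points = (date(2018, 3, 1) - start).days / 365.25  # ~4.0 points
```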

Geographic Considerations

Initially, the allocation of organs from deceased donors was largely a matter of circumstance, with geographic location being the dominant and often the only consideration. Deceased donor kidney allocation today is still predominantly influenced by geography. The reasons for this are multifactorial and, in part, a matter of precedent. The United States is divided into 11 regions by UNOS (Fig. 1). Each region currently contains between 2 and 10 OPOs, with a total of 58 OPOs covering the United States and Puerto Rico. Each OPO covers a specific donor service area (DSA), which includes the transplant centers and other hospitals in the area. Processes operational in all US hospitals ensure that patient deaths are referred to the local OPO for consideration of organ procurement. In the codified rules of deceased donor kidney allocation, there is a preference for local use, i.e., within the same donor service area in which an organ is procured, to allow those organs to serve the community that provided them. Other factors favoring local organ use are concerns about the long cold ischemia times engendered by travel and the general desire for the surgical team responsible for the transplant to be involved in the recovery.

Fig. 1

This map illustrates the 11 regions of the United States and Puerto Rico set forth by the OPTN. The largely geographic divisions help to facilitate transplantation, and each is individually represented on the Board of Directors and all OPTN standing committees. Of note, a portion of Northern Virginia is included in Region 2, and Vermont is divided into eastern and western halves serviced by Regions 1 and 9, respectively

Between 1998 and 2000, the US Department of Health and Human Services amended NOTA to include the “OPTN Final Rule,” which preferences national use of procured organs over local use when the acuity of waitlisted patients warrants it (Smith et al. 2012; Stegall et al. 2017). The impact on kidney allocation was initially minor: because an established alternative therapy in the form of dialysis was readily available, increased severity of illness could not readily justify transporting kidneys nationally. Nevertheless, the major changes in kidney allocation that took effect in December 2014 included three modifications supporting the regional and national sharing of deceased donor kidneys. These changes increased the proportion of deceased donor kidneys used outside the local DSA/OPO from 21% pre-KAS to 32% during the initiation of KAS, with regionally distributed kidneys increasing from 8.8% to 12.7% and nationally distributed kidneys increasing from 12.6% to 18%.

The three changes that increased travel of deceased donor kidneys under the new KAS were the elimination of OPO-specific variances, the regional sharing of higher parenchymal risk kidneys (those with a Kidney Donor Profile Index (KDPI) score greater than 85%), and the regional and national sharing of kidneys to meet the needs of highly sensitized waitlist candidates. OPO-specific variances, which existed pre-KAS, allowed routing of deceased donor kidneys in ways that did not follow standard UNOS rules and often had a strong localism aspect in their design (Weimer 2010). For example, the Gift of Life™, the OPO that services Delaware, the eastern half of Pennsylvania, and parts of New Jersey, preferentially directed deceased donor kidneys from the Harrisburg/Pocono area to patients listed at transplant centers located close to these areas during the pre-KAS years. Eliminating these variances does not affect regional or national sharing but does change routing within an OPO’s DSA. Immediately channeling higher KDPI kidneys (KDPI >85%) regionally has resulted in more regional transport of these organs, with local transplantation rates dropping from 69.2% to 50.9% after KAS initiation. Local transplants of kidneys with the best parenchymal quality, a KDPI between 0% and 20%, changed very little, from 23% pre-KAS to 22% post-KAS (Stewart et al. 2016). KAS mandates regional sharing of kidneys for waitlisted candidates with a Calculated Panel-Reactive Antibody (CPRA) of 99% or greater and national sharing for candidates with a CPRA of 100%. This change in allocation rules led to an initial bolus of transplantation for these broadly sensitized candidates and increased movement of deceased donor kidneys out of local OPOs. Of note, the new rules removed the difficult-to-track and difficult-to-enforce Payback Policy, under which, pre-KAS, OPOs would accrue a kidney “debt” to the OPOs from which they imported a kidney. This was the one component of KAS that would decrease travel of deceased donor kidneys.

Both pre- and post-KAS, geography plays a dominant role in access to deceased donor organs and kidneys. Today, in many OPOs the median waiting time cannot even be calculated because the majority of listed patients have not been and will not be transplanted. In other OPOs, access to transplantation is much easier, with some areas of the country having median waits for a deceased donor transplant of as little as 1 year (SRTR & OPTN 2012; Zhou et al. 2018).

The Acquisition of Data Leading to the Expanded Criteria Donor Category

Early in the history of deceased donor kidney transplantation, it was acknowledged that deceased donor kidney quality was variable and that kidney graft life could be impacted by certain donor factors (Kasiske 1988). With the creation of the OPTN, data collection via a registry of transplant recipients and donors was established. This Scientific Registry of Transplant Recipients (SRTR) began collecting data in October 1987 on every transplant that occurred in the United States. By 1993, voluminous data were available that definitively demonstrated deceased donor kidney transplantation’s superiority over dialysis despite the increased short-term morbidity and mortality risk associated with the added surgery. By day 117 posttransplant, death rates between remaining on dialysis and receiving a deceased donor kidney became equivalent, and by day 325, transplantation began to demonstrate a consistent and widening survival advantage (Wolfe et al. 1999). The criteria for who could be a deceased donor broadened during the 1990s, but as deceased donors with more complicated medical histories became commonplace, it became increasingly clear that the outcomes of these higher-risk transplants suffered. Discard rates of already procured kidneys also began to increase among certain types of donors.

In October 2002, OPTN policy began to distinguish Expanded Criteria Donor (ECD) kidneys to allow specific routing of these organs and to encourage their greater use in appropriate recipients. ECD donors were defined as any donor 60 years old or older, or a donor aged 50–59 with two of the following: a history of hypertension, a serum creatinine greater than or equal to 1.5 mg/dl, or death resulting from a stroke. These factors were associated with an increased relative risk of graft loss of 1.7 in comparison with a well-selected Standard Criteria Donor (SCD) reference group. Five-year graft survival for ECD kidneys was 51% in comparison with 68% for non-ECD kidneys (Wynn et al. 2004). Between 2002 and December 2014 (pre-KAS), there were four specific deceased donor kidney allocation groups:

  1. Kidneys from donors younger than 35 years of age being preferentially allocated to pediatric candidates (implemented in 2005)

  2. ECD kidneys allocated to recipients who consented to receive these organs

  3. Donation after cardiac death (DCD) kidneys being allocated according to a sequence that valued placement within a local distribution to lessen cold ischemia time

  4. All remaining SCD kidneys being offered to all candidates on the waiting list

The specific concept behind ECD kidneys was an acknowledgment that these organs had a shorter graft life but that the waiting time to obtain them would be shorter than for SCD grafts.

The Development of Calculators and the Reliance on the Kidney Donor Profile Index (KDPI) Score for Allocation

Liver allocation in the United States underwent major changes in early 2002. Prior to 2002, the Child-Turcotte-Pugh score and the candidate’s location (home, hospital, or ICU) were the principal metrics used to define a candidate’s level of illness and thus his/her position on the waitlist (Christensen et al. 1984). It became commonly agreed among liver experts that these measurements lacked objectivity in defining the degree of liver decompensation. Ultimately, the liver transplant community decided that the Model for End-Stage Liver Disease (MELD) score, initially studied only for risk stratification in transjugular intrahepatic portosystemic shunt (TIPS) placement, was a far superior measurement and adopted it for liver allocation (Desai et al. 2004; Smith et al. 2012). This began the use of calculators, sophisticated mathematical formulas, in allocation, an approach that would be duplicated in the following decade in deceased donor kidney transplantation.

Further data accumulation in the SRTR, along with improved statistical methodology and a refined consensus on what impacted graft survival, allowed transplant researchers to develop formulas for the relative impact of different factors on graft and recipient survival. This was first accomplished for liver transplantation in 2006, when Sandy Feng published what would become known as the Liver Donor Risk Index (LDRI), which incorporated both donor and transplant variables in predicting the likelihood of liver transplant success (Feng et al. 2006). After Dr. Feng’s publication, creating an analogous risk index for kidney transplantation became an objective for many researchers, and in 2009, Rao et al. published the Kidney Donor Risk Index (KDRI) (Rao et al. 2009). This initial KDRI score estimated the relative risk of posttransplant kidney graft failure for the average adult recipient of a deceased donor kidney. Specifically, it ranged in value from 0.48 to 4.2, and descriptively, a kidney with a KDRI score of 1.30 would have a relative risk of graft failure 1.3 times that of the median kidney from the study interval (Rao et al. 2009; Friedewald et al. 2013). The KDRI was analogous to the LDRI in that it used both donor and transplant variables and thus was not immediately appropriate for use in kidney allocation, since transplant variables can only be known after completion of a transplant. The initial transplant variables for the KDRI were the level of HLA-B and HLA-DR matching between donor and recipient, the cold ischemia time, and whether a dual deceased donor kidney transplant was performed. Soon the KDRI was adapted to use only the ten donor variables, making it readily usable for allocation (Rao et al. 2009). There are six binary and four complex donor variables:

  1. Whether or not the donor’s cause of death was stroke related

  2. Donor history of hypertension

  3. Donor history of diabetes

  4. Donor hepatitis C status

  5. Whether the donor is African-American or not

  6. Whether the donor is a DCD donor

  7. Donor height

  8. Donor weight

  9. Terminal serum creatinine level

  10. Donor age

Variables 7 through 10 have a more complex impact on the score. The donor’s height in centimeters and weight in kilograms have a linear inverse effect, with taller and heavier donors having a lower score (Rao et al. 2009). For all donors weighing 80 kg or more, the impact of weight is equivalent, so there is no further reduction of the KDRI for these donors. Terminal serum creatinine has a generally linear direct relationship with the KDRI, but the impact of values greater than 1.5 mg/dl is lessened somewhat, recognizing that a high creatinine in a donor is often a simple manifestation of acute and recoverable kidney injury. Finally, the impact of donor age is the most complex, with both the youngest and oldest donors having higher KDRI scores (Rao et al. 2009). A sketch of the calculation follows this paragraph.
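The shape of these effects is easiest to see in code. Below is a sketch of a donor-only KDRI calculation using the coefficients as the author understands them from Rao et al. (2009) and the OPTN KDPI documentation; it is illustrative rather than the official calculator and omits the median-donor scaling applied in practice.

```python
import math

def kdri_raw(age, height_cm, weight_kg, african_american, hypertension,
             diabetes, cod_cva, creatinine, hcv_positive, dcd):
    """Donor-only KDRI relative to a reference donor (sketch only)."""
    xb = 0.0128 * (age - 40)
    if age < 18:
        xb += -0.0194 * (age - 18)        # very young donors raise risk
    if age > 50:
        xb += 0.0107 * (age - 50)         # older donors raise risk further
    xb += -0.0464 * (height_cm - 170) / 10.0   # taller -> lower score
    if weight_kg < 80:                          # no extra credit above 80 kg
        xb += -0.0199 * (weight_kg - 80) / 5.0  # lighter -> higher score
    xb += 0.1790 * african_american
    xb += 0.1260 * hypertension
    xb += 0.1300 * diabetes
    xb += 0.0881 * cod_cva                      # death from stroke
    xb += 0.2200 * (creatinine - 1.0)           # higher creatinine -> higher risk
    if creatinine > 1.5:
        xb += -0.2090 * (creatinine - 1.5)      # effect damped above 1.5 mg/dl
    xb += 0.2400 * hcv_positive
    xb += 0.1330 * dcd
    return math.exp(xb)

# A 55-year-old, 165 cm, 70 kg hypertensive donor who died of stroke:
print(kdri_raw(55, 165, 70, 0, 1, 0, 1, 1.2, 0, 0))
```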

The Kidney Donor Profile Index (KDPI) is a simplified scoring system mapped from the KDRI, with values ranging from 0% to 100%; 100% represents the riskiest deceased donor kidneys, and lower scores are associated with higher donor quality and greater expected longevity. The reference group onto which each kidney is mapped is the population of all deceased donors from the previous calendar year. KDPI began to be reported on DonorNet® in June 2013 and was ultimately incorporated into kidney allocation in December 2014. It was immediately apparent that the KDRI and its derivative, the KDPI, provided a much more granular and consistent metric of kidney quality than the SCD/ECD dichotomy (Friedewald et al. 2013). Figures 2 and 3 depict graft survival for kidneys of various KDPI scores, and a sketch of the percentile mapping follows the figures.

Fig. 2

This graphic compares the estimated half-lives (i.e., the time it takes for half of the grafts functioning at one year to subsequently fail) of different donor kidney grafts, in years. The numbers are based on OPTN data as of March 21, 2018. (OPTN/HRSA 2018)

Fig. 3

The graphic illustrates both one-year and two-year estimated graft survival rates for donor kidneys based on their KDPI. The numbers are based on OPTN data as of March 4, 2016. (DonorNet 2018)
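The KDRI-to-KDPI percentile mapping described above can be sketched as follows; reference_kdris stands in for the OPTN reference population (all deceased donors recovered in the prior calendar year), and the function name is hypothetical.

```python
import bisect

def kdpi_from_kdri(kdri, reference_kdris):
    """Map a KDRI value to a KDPI percentile (0-100%).

    KDPI is simply the percentage of the reference population whose
    KDRI is lower (i.e., whose kidneys carry less estimated risk).
    """
    ranked = sorted(reference_kdris)
    below = bisect.bisect_left(ranked, kdri)
    return round(100 * below / len(ranked))
```

Read this way, a KDPI of 85% simply means that 85% of the prior year’s donors carried a lower estimated risk of graft failure.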

With the initiation of KAS, as in the previous allocation system, there are four distinct pathways for kidney allocation, namely, Sequence A for KDPI of 20% or less, Sequence B for KDPI of 21–34%, Sequence C for KDPI of 35–85%, and Sequence D for KDPI greater than 85%. Within each sequence, candidates are rank-ordered according to points granted for circumstances such as waiting time, sensitization, being a prior living organ donor, or being a pediatric candidate. The specific criteria for routing in each sequence are detailed in Table 1, and a sketch of the sequence assignment follows the table. As previously mentioned, higher-risk kidneys with a KDPI score greater than 85% are offered locally and regionally simultaneously, with the hope that this will enable appropriate routing of these organs, which face high discard rates. For kidneys with a KDPI score less than 21%, the new allocation rules contain a special provision, based primarily on utility concerns, routing them to specific candidates who are expected to have the longest posttransplant survival.

Table 1 This table shows a simplified summary of the current routing algorithm under KAS for deceased donor kidney grafts based on their KDPI as of March 2018. Within each of the subcategories (e.g., local pediatrics) under a given sequence, transplant candidates are ranked in order of their allocation points
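A minimal sketch of the sequence assignment, using the KDPI cutoffs named above, is shown below; the candidate ranking that happens within each sequence is omitted.

```python
def kas_sequence(kdpi_percent):
    """Assign a deceased donor kidney to one of the four KAS sequences."""
    if kdpi_percent <= 20:
        return "A"   # longevity-matched toward EPTS <= 20% candidates
    if kdpi_percent <= 34:
        return "B"
    if kdpi_percent <= 85:
        return "C"
    return "D"       # offered locally and regionally at once to cut discards
```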

Utility Concerns and the Estimated Posttransplant Survival Score

As with the allocation of any scarce resource, two dominant principles have guided policy development in deceased donor kidney transplantation. Equity is a principle holding that all candidates should have a fair opportunity to access a resource, in this instance a deceased donor kidney. Utility is a principle recognizing that society’s benefit will differ depending on how the scarce resource is distributed. Prior to the changes introduced with KAS in December 2014, the deceased donor Kidney Allocation System focused principally on equitable access, with utility concerns prioritized only for pediatric considerations, the Zero Mismatch Policy, and point boosts for specific HLA-B and HLA-DR matches. Many researchers modeled systems in which utility was given the dominant weight, and it became increasingly clear that the number of life years gained under an equity-dominated allocation system was reduced in comparison with systems where utility was prioritized (Wolfe et al. 2008; Segev 2009).

The desire to alter kidney allocation with a greater focus on utility became a perennial concern of the OPTN’s Kidney Committee starting as early as 2003 (Friedewald et al. 2013). By this time, it was becoming increasingly apparent that the waitlist’s growth was predominantly among candidates greater than 50 years old, with an increasing number over 70 years old. Various proposals submitted to the OPTN’s Kidney Committee were rejected because they were overwhelmingly ageist (OPTN/UNOS 2008). One eventual driver of policy change was the recognition that younger waitlist candidates who received inferior-quality deceased donor kidneys were likely to return to the waitlist pool and require retransplantation, further depleting the number of organs available. Therefore, in December 2014, KAS introduced longevity matching, a policy in which deceased donor kidneys with a KDPI score of ≤20% are routed toward adult candidates with the best 20% of estimated posttransplant survival (EPTS) scores. This score applies only to candidates 18 years or older and ranges from 0% to 100%, with higher scores indicating worse predicted posttransplant survival. Because generating the percentile requires data on the entire waitlist, UNOS provides a web-based calculator that requires input of four pieces of information. The candidate’s date of birth and the start date of chronic maintenance dialysis, if dialysis has started, are the two date variables required. The other two variables are binary in their effect on the score: the candidate’s diabetes history (diabetic or not) and the candidate’s prior transplant history (no prior transplant or a prior transplant). In its current form, the web-based calculator does allow the specific number of transplants to be entered and gives three different diabetes options, but none of these choices alters the score. These four variables were selected by the UNOS Board of Directors for their objectivity and simplicity, in an attempt to increase the transparency of the process for the general population (Clayton et al. 2014).
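For illustration, the sketch below computes a raw EPTS from the four inputs, using coefficients as the author understands them from the OPTN EPTS worksheet; converting the raw score to a percentile requires the OPTN reference table for the current waitlist, which is not reproduced here.

```python
import math

def epts_raw(age_years, years_on_dialysis, diabetic, prior_transplant):
    """Raw EPTS from the four KAS inputs (illustrative coefficients)."""
    dm = 1.0 if diabetic else 0.0
    tx = 1.0 if prior_transplant else 0.0
    no_dial = 1.0 if years_on_dialysis == 0 else 0.0
    over25 = max(age_years - 25.0, 0.0)
    log_dial = math.log(years_on_dialysis + 1)
    return (0.047 * over25 - 0.015 * dm * over25
            + 0.398 * tx - 0.237 * dm * tx
            + 0.315 * log_dial - 0.099 * dm * log_dial
            + 0.130 * no_dial - 0.348 * dm * no_dial
            + 1.262 * dm)

# A 40-year-old non-diabetic, 2 years on dialysis, no prior transplant:
print(epts_raw(40, 2, False, False))  # lower raw score -> better percentile
```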

The impact of longevity matching for adult patients with an EPTS ≤20% has likely been siphoned somewhat by the increasingly common scenario in which another organ transplant pulls a desirable, likely low-KDPI, deceased donor kidney along with it. For example, 2017 was a record year for both liver-kidney and heart-kidney transplants, with 739 and 187 performed, respectively (OPTN 2017).

Pediatric Candidates

NOTA’s initial language makes special provisions for pediatric patients, and multiple stakeholders in pediatric care have lobbied for the protection of children and placed their welfare as an objective of paramount importance. It has also been accepted by the transplant community that the benefit a child receives from an organ transplant may have long-standing health consequences over that individual’s life and thus lead to a considerable gain in quality life years. Accordingly, the allocation system has consistently awarded pediatric candidates 10 years old or younger 4 points (i.e., 4 years of time), with an additional point added if the donor has a KDPI score of <35% (OPTN 2018b). Candidates between 11 and 17 years of age have been awarded 3 points (Neylan et al. 1999; Smith et al. 2012; OPTN 2018b). Pre-KAS, deceased donors under the age of 35 were specifically directed toward pediatric recipients. Under KAS, preferential pediatric access is maintained, but routing is directed by a KDPI <35% instead of donor age (Friedewald et al. 2013). Data post-KAS implementation have demonstrated only a modest negative effect on pediatric candidates’ access to transplantation, despite many changes that would advantage adult candidates (OPTN 2016).

Early Immunologic Concerns, the Development of the Zero Mismatch Policy, and HLA-DR Matching

The surgical technique of kidney transplantation was resolved long before the immune system’s response to receiving another human being’s organ was understood. The history of early kidney transplantation, even a decade after the successful Herrick twin transplant, was fraught with frequent failures that would be considered disgraceful by today’s standards. Many early kidney transplants were lost to preformed antibody against the donor that could not be recognized at the time (Kissmeyer-Nielsen et al. 1966). Going across blood groups was occasionally tried, sometimes successfully, but the majority of researchers in the field abandoned these endeavors in the 1960s (Starzl 2000). In addition, it was becoming increasingly apparent that preexisting antibodies could lead to early graft loss even if blood typing was convincingly compatible and the surgical technique was flawless (Starzl et al. 1964). The human leukocyte antigen (HLA) was first discovered in 1958, but its full characterization continues to be a daunting challenge to researchers even today (Dausset 1958; Terasaki et al. 1965). Initially, what was simpler to accomplish was determining whether an immediate reaction was likely, which could be done by mixing donor white cells with recipient serum (van Rood et al. 1958; Patel and Terasaki 1969). Over time, characterizing the HLA became increasingly possible, and with improved understanding of this point of high variability, its importance in kidney graft survival when well matched became undeniable (Mickey et al. 1971). In addition, it became increasingly apparent that certain patients were likely to have multiple antibodies to different HLA antigens, presenting an immunologic barrier to safe transplantation.

The importance of HLA matching was well known to the OPTN upon its creation, and in 1987, UNOS mandated sharing of HLA-A, HLA-B, and HLA-DR matched deceased donor kidneys as a major utility measure designed to prolong kidney graft survival. Curiously, the technology behind class II HLA (-DR and -DQ) typing had an inaccuracy rate of at least 25% at that time (Burlingham et al. 2010). However, with improvements in polymerase chain reaction (PCR) technology and a better understanding of the HLA, UNOS was able to revise its mandated matched sharing policy in 1995 to accommodate situations in which there might be HLA homozygosity at one, two, or three of the loci (Leffell and Zachary 1999). The new policy required obligate sharing of deceased donor kidneys whenever there were zero mismatches (0-MM) between recipient and donor at the HLA-A, HLA-B, and HLA-DR loci (Leffell and Zachary 1999). During this time, more than 15% of deceased donor transplants nationwide were allocated and transplanted under the Zero Mismatch Policy (Burlingham et al. 2010). In addition, points were awarded for the quality of HLA-B and HLA-DR matching, with a maximum of seven points for 0-MM at these four alleles (Leffell and Zachary 1999; Neylan et al. 1999). The Zero Mismatch Policy did lead to increased travel of kidneys and longer cold ischemia times, and the Payback Policy, also in effect, meant that for every 0-MM kidney that traveled, a payback kidney would likely return to the donating OPO. However, studies of the outcomes of these traveling kidneys were favorable in terms of overall survival despite the increased cold ischemia times. The Zero Mismatch Policy was also an avenue for more sensitized patients to be transplanted, with 47% of these grafts going to patients with panel-reactive antibodies of ≥80% (Stegall et al. 2002). In 2003, when it became increasingly apparent that African-Americans were particularly disadvantaged because this group had a low likelihood of receiving any benefit from HLA-B matching, B matching points were eliminated (Gill 2011; Hall et al. 2011). HLA-DR matching, however, was maintained and continues in use today, with 0-MM at the DR loci awarded 2 points, 1-MM awarded 1 point, and the majority, 2-MM, awarded no points. In 2008, for multiple reasons including the phenomenal growth of the aging part of the waitlist, UNOS narrowed the 0-MM sharing obligations to exclude patients whose CPRA was less than 20% (Burlingham et al. 2010).
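As a small illustration of the DR point schedule just described, the sketch below counts mismatches as unique donor antigens absent in the recipient, a simplification of OPTN mismatch-counting conventions; all names are the author’s own.

```python
def dr_mismatch_points(donor_dr, recipient_dr):
    """Allocation points for HLA-DR matching:
    0 mismatches -> 2 points, 1 mismatch -> 1 point, 2 -> 0 points.

    A donor antigen counts as a mismatch if the recipient does not
    carry it; using sets means a homozygous donor antigen can only
    be counted once.
    """
    mismatches = len(set(donor_dr) - set(recipient_dr))
    return {0: 2, 1: 1}.get(mismatches, 0)

# Example: donor DR4/DR7 into a DR4/DR11 recipient -> 1 mismatch, 1 point.
assert dr_mismatch_points({"DR4", "DR7"}, {"DR4", "DR11"}) == 1
```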

The Development of the Calculated Panel-Reactive Antibody (CPRA) and the Very Highly Sensitized

One of the most important changes in immunologic testing in kidney transplantation in the last decade is the transition from the Panel-Reactive Antibody (PRA) to the more epidemiologically refined CPRA. The PRA test measures the breadth of sensitization of a particular candidate and traditionally is reported as a value between 0% and 100%, with non-sensitized candidates having values of less than 20% and most often 0%. Sensitized candidates typically have PRAs ≥20%, but there is clustering of candidates at the highest PRA values of >95% (Keith and Vranic 2016). The causes of sensitization are typically prior pregnancies in female candidates, prior blood transfusions, prior transplants, in rare instances infection or immunization, and prior tissue exposures such as an islet transplant (Campbell et al. 2007). It was readily apparent that patients with PRA values ≥80% faced a considerable barrier to kidney transplantation. Thus, for at least two decades preceding KAS, candidates with PRAs ≥80% were awarded 4 points in deceased donor kidney allocation (Graham 1995; Leffell and Zachary 1999).

Early PRA tests were recognized as not necessarily reflective of the donor population, and they lacked the sensitivity of later assays. Over time, PRA panels improved in sensitivity and became increasingly reflective of the donor population. In addition, improved understanding of the HLA allowed testing of potential recipient serum against specific antigens, allowing the characterization of antigens that should be avoided for a particular transplant candidate. In 2007, the United Network for Organ Sharing (UNOS) Board of Directors approved a measure by the OPTN’s Histocompatibility Committee to implement a new system, the Calculated Panel-Reactive Antibody (CPRA). The CPRA is based on the frequency of HLA antigens in approximately 12,000 United States deceased kidney donors from 2003 to 2005 (Cecka 2010). The score reflects the percent chance of a positive crossmatch between a donor and recipient, based on the unacceptable HLA antigens entered for that recipient. A calculator for transplant professionals is available on the OPTN website to give the percent value for the avoids listed (Calculator 2018). In effect, entering CPRA avoids creates a path through which compatible crossmatches are much more likely to occur. Initially, the CPRA did not allow HLA-DQ and HLA-DP avoids to be reported, and this led to some unanticipated positive crossmatches (Singh et al. 2016). These loci have subsequently been added, but the allele-level expression of avoids is still imperfect, and positive physical crossmatches remain possible because most deceased donors have only low- to medium-resolution typing. Virtual crossmatches are now frequently done by tissue typing labs before a kidney is shipped any distance to minimize the possibility that it is destined for a candidate with whom it is incompatible.
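To make the principle concrete, the sketch below computes a CPRA-like value under the simplifying assumption that unacceptable antigens occur independently among donors; the real OPTN calculator works from haplotype frequencies drawn from the donor pool described above, so this is an illustration only.

```python
def cpra_simplified(unacceptable_antigen_freqs):
    """Percent of donors expected to carry >= 1 unacceptable antigen.

    Simplification: treats each antigen frequency as independent, so
    P(compatible donor) is the product of (1 - f) over the avoids.
    """
    p_compatible = 1.0
    for f in unacceptable_antigen_freqs:
        p_compatible *= (1.0 - f)
    return round(100 * (1.0 - p_compatible), 1)

# Avoids carried by 30%, 20%, and 10% of donors -> CPRA ~ 49.6%.
print(cpra_simplified([0.30, 0.20, 0.10]))
```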

When the CPRA calculator was first introduced, credit for sensitization was Boolean: only patients with a CPRA ≥80% would receive 4 points, so there was understandable concern that certain patients who had been characterized as highly sensitized in the old PRA system would lose points. This was in fact the case for roughly 12% of patients who were highly sensitized by PRA values at the time (Cecka 2010). However, the converse was also true: of the moderately sensitized by PRA (20–79%), roughly 20% were discovered to have a CPRA ≥80% (Cecka 2010). The CPRA system, which required reporting avoids, dramatically changed match runs for any given deceased donor kidney; in the prior PRA system, the highly sensitized candidates were often at the top of every match run and were only removed following testing. These changes had a stifling effect on the use of desensitization to access a deceased donor kidney: if desensitization succeeded in dropping CPRA avoids below the ≥80% threshold, the 4-point boost to the candidate’s rank would be lost, and the candidate’s place on any match run would drop similarly (Singh et al. 2010). This effect has persisted through KAS, and desensitization for deceased donor kidneys is rarely pursued today. Ultimately, these concerns were replaced by respect for the new technology, which eliminated many positive crossmatches. In addition, with the graded boost in points for entered CPRA avoids introduced with KAS, sensitization transitioned from being an obstacle to organ access to often being a driver of improved access.

Historically, highly sensitized candidates have waited considerably longer than non-sensitized candidates. The pre-KAS Boolean sensitization points did help highly sensitized candidates, but in a fashion strongly preferential to patients whose CPRA was between 80% and 84% (Cecka et al. 2011). Instances where individuals with a CPRA of >98% were offered a transplant were extremely rare and, when they occurred, were often contingent on a 0-MM kidney being available (Stegall et al. 2017). To help address this issue, KAS in December 2014 implemented a continuous, graded sliding scale for all candidates with a CPRA ≥20% (Friedewald et al. 2013) (Table 2). Under the new sliding scale, candidates with a CPRA of >90% receive a significantly greater number of points, ranging from 6.71 for a CPRA of 90% to 202.1 for a CPRA of 100% (Formica et al. 2014). Other notable changes included access to regional and national sharing for CPRAs of 99% and 100%, respectively. Early statistical analysis of OPTN kidney transplant data demonstrated an immediate success in increasing the proportion of transplants going to individuals with a CPRA of 99–100%, from 2.4% pre-KAS (December 2013–2014) to 13.4% post-KAS (December 2014–2015). However, during the same period, proportional transplant rates for candidates with CPRAs of 0–79% and 90–94% experienced moderate declines, while individuals with a CPRA between 80% and 89% experienced a severe decline of greater than 60% from pre-KAS to post-KAS (Stewart et al. 2016).

Table 2 At the time of writing, the table shows the current number of allocation points awarded to an individual based on their CPRA score. Compared to pre-KAS, where 4 points were awarded to all transplant candidates with a CPRA of ≥80%, under KAS potential transplant candidates receive points based on a continuous, graded sliding scale. These numbers are accurate based on OPTN policies as of March 1, 2018
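The shift from the Boolean pre-KAS award to the graded KAS scale can be sketched as below; only the two scale values quoted in the text are filled in, so the mapping is deliberately incomplete, with the full schedule in Table 2.

```python
def sensitization_points(cpra_percent, pre_kas=False):
    """Boolean pre-KAS award vs. the KAS graded sliding scale."""
    if pre_kas:
        return 4.0 if cpra_percent >= 80 else 0.0
    partial_scale = {90: 6.71, 100: 202.10}  # incomplete, from the text
    if cpra_percent not in partial_scale:
        raise KeyError("band not reproduced here; see Table 2")
    return partial_scale[cpra_percent]
```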

Improved Access for Blood Type B Candidates

After the succession of failed kidney transplants across blood group barriers in the late 1950s and early 1960s, the OPTN organized deceased donor kidney allocation along blood group compatibilities. It soon became apparent that blood group AB recipients were significantly advantaged compared to other groups, and blood group A also fared better than blood groups O and B. Because of this, many type B candidates face a longer wait for a transplant. Minorities, especially African-Americans, make up a disproportionate share of listed type B candidates compared with the waitlists of other blood types. As the type B waitlist is composed of over 70% minority patients, while type B kidneys make up less than 15% of the deceased donor kidneys available, UNOS has attempted on multiple occasions to address this disparity (OPTN 2018a). The first attempt to improve access for blood group B patients came in 2001, when UNOS policy directed type B kidneys away from blood group AB recipients (with an exception for the Zero Mismatch Policy) (Bryan et al. 2016). While this change allowed a modest increase in transplantation of blood group B patients, they still faced lower deceased donor kidney transplant rates than other blood types. Therefore, to better combat this problem, the new KAS implemented in 2014 allows non-A1 A and non-A1 AB blood type kidneys to be transplanted into B candidates. Approximately one fifth of blood type A individuals are non-A1, most often A2. Non-A1 A and non-A1 AB individuals express significantly lower amounts of A antigen than normal type A1 individuals, allowing the safe use of these organs in B candidates who are not sensitized against the A antigen. This policy was enacted to increase the potential donor pool for type B candidates with only a minor impact on transplant rates for A and AB candidates. A critical stipulation is that B candidates must demonstrate consistently low anti-A titers of ≤1:4 every 90 days, with any recorded titer of ≥1:8 considered prohibitively high (Bryan et al. 2016). Analysis of long-term (7-year) follow-up data from the Midwest Transplant Network OPO showed that B candidates who received an A2 or A2B kidney had non-inferior outcomes compared with traditional B-to-B transplants. One important consideration, however, is that a B type individual who has received an A2 or A2B organ can only receive plasma from AB donors, as plasma from a potentially sensitized B type source can initiate an antibody-mediated rejection.
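The stipulations above amount to a simple screening rule, sketched below with illustrative field names (the titer is encoded by its denominator, so 4 means 1:4); this is not OPTN policy language.

```python
from datetime import date, timedelta

def eligible_for_a2_to_b(candidate_blood_type, consented,
                         anti_a_titer, titer_date, today):
    """Screen a type B candidate for non-A1 (A2/A2B) kidney offers.

    Encodes the stipulations described above: documented consent,
    an anti-A titer of <= 1:4 (>= 1:8 is prohibitive), and a titer
    drawn within the last 90 days.
    """
    return (candidate_blood_type == "B"
            and consented
            and anti_a_titer <= 4
            and today - titer_date <= timedelta(days=90))

# A consented B candidate with a 1:2 titer drawn 30 days ago qualifies.
print(eligible_for_a2_to_b("B", True, 2, date(2018, 2, 1), date(2018, 3, 3)))
```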

Of all the changes made with KAS, improved access for blood type B candidates has been the one area where transplant centers have truly struggled to build the processes and infrastructure necessary to advantage their blood type B waitlists. It is generally agreed that the results of non-A1 A and non-A1 AB into B transplants are comparable to all other transplants when the blood type B candidate has a low anti-A titer; however, this type of transplant requires an additional consent from the prospective candidate. Monitoring anti-A titers while the candidate waits on the list presents another challenge, and as of June 2016, only 18% of transplant centers had performed these transplants (OPTN/UNOS Minority Affairs Committee 2017).

Living Donor Defined by Procurement

Of the changes implemented with KAS, defining a living donor by the procurement surgery rather than by transplantation of the organ has high symbolic significance but will likely have the least impact on actual transplant numbers. Historically, living donation was defined by the occurrence of a transplant. Unfortunately, there have been circumstances where a procurement surgery takes place but a subsequent transplant does not happen. Kidney donors represent the overwhelming majority of living donors, at greater than 95%. The next most common living organ donated is a portion of liver; as of February 8, 2018, the OPTN had recorded 6,406 living liver donors in the United States compared with 145,629 living kidney donors. Because living kidney donors’ relative risk of developing ESRD is approximately 7.9 compared with matched controls who did not donate, lack of access to transplantation for prior donors would present a significant problem (Grams et al. 2016). Therefore, under the new KAS policy, a prior living donor is still awarded 4 allocation points if they ever need to be listed for a kidney transplant, but they now have the assurance that they will be considered a donor whether or not a transplant actually took place after procurement (OPTN 2018b). Fortunately, the absolute risk of developing ESRD after living kidney donation is still much lower than that of the general population (90 per 10,000 vs. 326 per 10,000) (Muzaale et al. 2014).

Conclusion

Despite its many limitations, the prior Kidney Allocation System operated for nearly 30 years and facilitated close to a quarter of a million deceased donor kidney transplants. It did pose a considerable obstacle to patients who learned of their kidney failure late in the disease course, since it required listing at a transplant center before waiting time could accrue. It was also overly simplistic in its characterization of deceased donor kidney quality, using dichotomous descriptors instead of the numeric KDPI score. In addition to addressing these areas, the new Kidney Allocation System improved access for sensitized candidates and contains provisions to improve access for blood type B candidates. KAS also has an improved focus on the utility of deceased donor kidney transplants, directing the best kidneys to the adult candidates expected to benefit longest without significantly compromising pediatric access. KAS has also attempted to decrease discard rates by implementing simultaneous local and regional offers of higher KDPI organs. Despite these changes, geographic inequity is still extremely prevalent and remains a dominant determinant of access to deceased donor kidney transplantation.
