Introduction

The widespread clinical application of potent immunosuppressive regimens has significantly improved graft outcome by allowing grafting across histocompatibility barriers. Long-term graft and patient survival, however, have been obtained at the expense of impaired immune surveillance, which is further affected by other factors influencing the net state of immunosuppression, such as human leukocyte antigen (HLA) mismatching, graft damage, and concomitant infection with immunomodulating viruses [1]. Failure to activate or expand protective immunity has resulted in a significant increase in the rate of hospitalization for infectious complications over recent years [2]. Moreover, transplant recipients are experiencing serious morbidity and mortality from agents whose pathogenic potential in immunocompetent individuals is limited [1, 3, 4].

Viral infections are potentially severe complications of transplantation: not only do they induce specific diseases, but they also favor the development of allograft damage, opportunistic infections, and acute rejection [1]. Thus, considerable effort has been made to improve posttransplant viral infection control. Establishment of a viral monitoring program has gained consideration as a useful tool in achieving this goal, as it allows identification of preclinical or early stages of virus-related pathology, evaluation of response to treatment, and characterization of specific risk cohorts [5, 6]. The viral infections to be included in such a surveillance program must, of course, be selected carefully: the criteria are the severity of virus-related pathology, the availability of a suitable monitoring test, and the possibility of therapeutic intervention. In addition, aspects related to local epidemiology and to peculiarities of the transplant cohort, such as the immunosuppressive regimen and the percentage of virus-naive individuals, must be considered.

Improved management of viral infections in the immunocompromised host is partly attributable to advancements in diagnostic virology. The onset of viral replication after transplantation depends on exposure to a given pathogen in the absence of a protective immune response due to immunosuppression. Thus, the parameter universally selected to monitor infections is viral load. The development and implementation of sensitive, specific, and reliable diagnostic assays allowing quantification of viral load have proved instrumental in augmenting the clinical utility of viral monitoring [6–8]. In addition, as infection control will ultimately depend on the restoration of a protective immune response, evaluation of virus-specific immunity has permitted further characterization of subgroups of patients at high risk for disease development [9, 10] and the development of therapeutic strategies based on administration of antigen-specific T cells expanded in vitro from the memory pool [10, 11]. Finally, the development and application of novel antiviral therapeutic agents has also greatly contributed to the successful management of viral infections after transplantation [12, 13].

Here we review the clinical applications of viral monitoring in the setting of pediatric kidney transplantation and discuss future directions in the field of antiviral surveillance and virus-associated disease prevention and management.

Viral monitoring

The number of viral infections relevant to transplantation is constantly increasing. On the basis of prevalence, severity, availability of diagnostic tests, and therapeutic intervention, the viruses presently monitored most frequently in kidney transplant (KT) centers, other than hepatitis viruses, include cytomegalovirus (CMV), polyomavirus BK (BKV), and Epstein-Barr virus (EBV).

Cytomegalovirus

CMV is the major infectious complication in KT recipients. Transmission can occur from a seropositive organ to a seronegative recipient, causing primary infection; alternatively, reinfection/reactivation occurs in some seropositive recipients. CMV can cause a variety of end-organ diseases in KT patients, including hepatitis, gastrointestinal ulceration, pneumonitis, retinitis, or CMV syndrome with fever and leucopenia [14]. In addition to directly attributable morbidity, CMV has indirect effects [15], including an immunomodulating activity likely responsible for increased risk of opportunistic infections and EBV-related posttransplantation lymphoproliferative disease (PTLD) [16] and a role in acute and chronic allograft injury, with an increased risk of rejection [16, 17]. It has been known for many years that the serostatus of donor and recipient at the time of transplant provides prognostic information about the risk of the recipient developing CMV disease [5]. Patients at greatest risk include CMV seronegative recipients of organs from CMV seropositive donors and patients receiving antilymphocyte antibodies [18, 19]. Children are particularly vulnerable, as many are CMV seronegative at transplantation.

CMV surveillance was instituted as early as the beginning of the 1990s, when cell-culture methods for detecting CMV infection, which required 2–4 weeks, were replaced by the shell vial assay [20], which allowed identification of viral replication within 16–48 h by immunofluorescence staining of the immediate-early antigen p72, and by the pp65 antigenemia screening assay [21, 22], which provided a semiquantitative count of polymorphonuclear neutrophils (PMN) staining positive for the lower-matrix protein pp65. The latter test, being more rapid (6 h) and more specific, prompted systematic CMV monitoring and the application of preemptive treatment with effective antiviral drugs [21, 22]. Since then, quantitative polymerase chain reaction (PCR) assays for DNA determination have gradually replaced antigenemia, as they are less operator dependent. The two tests are equally efficient in detecting viral replication and preventing disease [18, 19], but standardization across different laboratories remains a problem for both, and, at present, each center has to validate its own threshold values. Additional difficulties may arise once patients are discharged from the transplant center and followed elsewhere: confusion in interpreting viral load changes, possibly due to a change in the laboratory performing the tests, could lead to inappropriate therapeutic decisions.

Without some form of preventive therapy, symptomatic CMV infection occurs in a high percentage of renal transplant recipients [18]. The two strategies commonly employed for CMV prevention are preemptive therapy and universal prophylaxis [6, 13, 19, 22], and both have advantages and disadvantages. The preemptive approach, which limits antiviral drug administration to patients with evidence of CMV replication, reduces toxicity and drug costs but is more labor- and diagnostics-intensive and may permit the indirect effects that occur in the presence of asymptomatic infection [23, 24]. Universal prophylaxis may have the advantage of limiting indirect effects and preventing reactivation of other viruses. In the adult KT population treated prophylactically, testing for CMV viremia is generally not performed [6]. In the pediatric cohort, however, pharmacokinetic and compliance issues prompt monthly viremia screening and careful assessment of signs and symptoms of CMV disease. With preemptive therapy, weekly monitoring is recommended until month +3 [6]. When viremia rises above the predetermined threshold cutoff, therapeutic intervention is started and continued until one or two negative tests are obtained (Fig. 1). During this phase, monitoring is performed once or twice a week. After therapy discontinuation, monitoring may be replaced by secondary antiviral prophylaxis, especially in cases of recurrent viremia.

Fig. 1 Cytomegalovirus screening program in use at the Pediatric Kidney Transplantation Unit of the G. Gaslini Institute, Genova

Recently, a management question has arisen regarding the high-risk D+/R− cohort. The wide use of prophylaxis in D+/R− patients has resulted in an increased incidence of late-onset CMV disease after treatment discontinuation [13, 25]. Awareness of this occurrence is crucial for pediatric recipients, who are more likely to belong to the high-risk D+/R− cohort, in order to detect early signs of disease after prophylaxis discontinuation and to prevent disease onset. Proposed solutions include prolonging prophylaxis from 3 to 6 months or more [26], delaying the start of prophylaxis by 14 days to allow development of specific immunity [27], and continuing CMV monitoring beyond day +100–200 until month +12 [28]. Regarding the latter point, a study of 364 D+/R− transplant recipients receiving 100-day prophylaxis recently showed that biweekly monitoring before day +100 and subsequent testing at months +4, +4.5, +6, +8, and +12 was of little help in predicting disease onset [28]. However, intensified surveillance after prophylaxis discontinuation may help prevent CMV disease. At present, no prospective, randomized, controlled trial has clearly indicated whether preemptive or prophylactic treatment is the optimal approach, and the choice depends mostly on the institution and available resources; prophylaxis, however, is the preferred option in many centers for the high-risk D+/R− cohort. Finally, monitoring may be further refined by systematically evaluating viral load dynamics and constructing mathematical models that predict the evolution of viral replication from the first few monitoring samples [29]. This could allow prospective individualization of treatment and thus optimize outcome.
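Reference [29] describes this kind of kinetic modeling in detail. Purely as an illustration of the underlying idea, and not of the model used in that study, the sketch below assumes simple exponential growth between two consecutive quantitative PCR samples, estimates the viral doubling time, and projects the time remaining before a center-validated treatment cutoff would be crossed; all numbers, thresholds, and function names are hypothetical.

```python
import math

def doubling_time_days(load1, load2, interval_days):
    """Estimate viral doubling time (days) from two quantitative PCR
    results (copies/mL), assuming exponential growth between samples."""
    if load2 <= load1:
        raise ValueError("load is not rising; doubling time is undefined")
    return interval_days * math.log(2) / math.log(load2 / load1)

def days_to_cutoff(current_load, cutoff, doubling_time):
    """Project the days remaining before the load crosses a
    center-validated treatment cutoff (copies/mL)."""
    if current_load >= cutoff:
        return 0.0
    return doubling_time * math.log2(cutoff / current_load)

# Illustrative numbers only; every center must validate its own cutoff.
load1, load2 = 600.0, 4800.0            # CMV DNAemia measured one week apart
td = doubling_time_days(load1, load2, 7)
print(f"doubling time ~ {td:.1f} days")
print(f"~ {days_to_cutoff(load2, 10000.0, td):.1f} days to a 10,000 copies/mL cutoff")
```

In practice, such estimates would be recomputed as each new sample becomes available and interpreted alongside the patient's clinical status rather than in isolation.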

Intravenous ganciclovir has long been the most commonly used agent for intensive prophylaxis in pediatrics. The use of orally administered ganciclovir as an alternative in children has been limited by low bioavailability, which necessitates large and frequent doses [30]. Valganciclovir, the valine ester of ganciclovir, has an oral bioavailability approximately tenfold greater than that of ganciclovir, and a recent study showed that its administration to pediatric solid-organ transplant (SOT) recipients, using a dosing algorithm adjusted for body size and renal function, provided ganciclovir exposures similar to those established as safe and effective in preventing CMV disease in adult transplant recipients [31]. In the case of CMV disease, it is still recommended that pediatric patients be treated with intravenous ganciclovir, given the lack of efficacy data on oral therapy in the pediatric cohort. Viral load assessment is a good predictor of response to treatment [18, 28]: decreasing CMV loads are associated with disease resolution, whereas persistently high or rising loads may indicate drug resistance. In this regard, assessing viral load by antigenemia has a major drawback compared with DNAemia, namely a poor correlation with virus replication, as demonstrated by the paradoxical rise in antigenemia, with a parallel decrease in DNAemia and viremia, occasionally observed in patients treated with ganciclovir [32].

Since oral prophylaxis came into widespread use, cases of ganciclovir-resistant CMV, due to mutations in the UL97 gene, which encodes an enzyme involved in drug activation, or in the UL54 CMV DNA polymerase gene, have increasingly been reported, particularly in the D+/R− population [33]. In cases of proven ganciclovir resistance, treatment alternatives include foscarnet or cidofovir, both of which, however, may exert significant nephrotoxicity.

Polyomavirus BK

BKV-associated nephropathy (BKVAN) is the most challenging infectious cause of renal allograft dysfunction and graft loss [34–37]. BKV transmission occurs during childhood. After primary infection, renal tubular epithelial cells and the urothelial cell layer represent sites of viral latency or replication [38]. BKVAN represents a complication linked to high-rate virus replication in the grafted kidney [8, 39–41]. The prevalence of BKVAN ranges from 1% to 10%, with approximately three quarters of cases occurring within the first year posttransplantation [34, 42]. The prevalence in pediatric recipients is similar to that observed in adults [43–49]. Renal allograft loss occurs in 10% to >80% of cases [34, 35, 42, 43], and the highest rate of graft loss is observed in cases of late diagnosis or treatment failure [35–37, 41]. To date, the therapeutic intervention of choice is reduction and/or switching of immunosuppressive drugs, because antiviral drugs – although showing some specific activity directed at the BKV life cycle in vitro – have not proven efficacious in controlled clinical trials [34, 41–43]. Analyses conducted on patients transplanted in the last 10 years indicate that treating established disease leads to poor allograft outcome [35–37].

Advances in the development of diagnostic tools and increased awareness of the importance of screening have led to treatment at earlier stages, before significant renal function deterioration has occurred. This has resulted in improved outcome [34, 35, 41–43, 50]. The diagnosis of BKVAN requires evaluation of a renal biopsy, with demonstration of polyomavirus cytopathic changes and interstitial nephritis [34, 38, 41]. Given the focal nature of the disease, and possible overlap with other pathologies that complicate the posttransplant course, early diagnosis may be difficult to obtain. It has been shown that BK viruria precedes BK viremia by a median of 4 weeks and that BK viremia precedes BKVAN by a median of 8 weeks [51]. Thus, monitoring BK viruria, generally by urine cytology or quantitative PCR for viral DNA, and monitoring BK viremia by quantitative PCR allow identification of patients at risk of developing BKVAN. BK viruria and viremia are linked, but with increasing levels of viruria, the relationship loses linearity. Moreover, kinetics of viral load in urine and plasma are not always concordant. These clinical observations are best explained by a model in which BKV replication starts in the kidney and is amplified in the urothelial compartment, with partial reflux to the allograft [39]. Urine and plasma seem to be separate replication compartments, with plasma being directly linked to graft replication. Consequently, sustained detection of BKV replication, assessed as plasma loads by quantitative PCR, is the most predictive assay for the presence of “presumptive” BKVAN [34, 38, 40, 51]. For this reason, it is recommended by current guidelines as the best assay by which to guide preemptive interventions [50, 52]. Indeed, prospective studies conducted in both adult and pediatric recipients demonstrated the feasibility of preventing BKVAN and allograft damage by treating presumptive BKVAN on the basis of plasma DNA monitoring, with reduction of immunosuppression [51, 53–55]. In the only pediatric cohort followed prospectively, immunosuppression reduction successfully induced BK viremia clearance in all 13 positive patients without BKVAN development or organ rejection [54].

Screening is essential to reach an early diagnosis and facilitate intervention for BKV replication and BKVAN. There is still debate as to which BKV screening test ought to be employed. Viruria has a high negative predictive value for BKVAN and may be usefully employed as a first-line screening test. Further advantages are that urine sampling is less invasive than blood sampling, that viruria may also be assessed by conventional cytology (decoy cells) and thus does not require molecular biology expertise, and that urine has a potentially lower PCR inhibitory activity than plasma, thus reducing the risk of false negatives for laboratories with no previous expertise. Anecdotal cases of kidney recipients with histologically proven BKVAN who were negative for viremia have been reported [56] and have led some clinicians to base their preemptive strategies on viruria rather than viremia. However, the relatively low specificity of viruria as a predictive test for BKVAN prompts additional confirmation by viremia prior to a preemptive therapeutic intervention, in order to avoid unnecessarily reducing immunosuppression in a large proportion of patients.

The screening methodology and schedule ought to be center specific, depending on local expertise and on the characteristics of the monitored cohort. It has been suggested to screen for BKV replication at least once every 3 months in the first year posttransplant, every 6 months up to 2 years, and annually thereafter for 5 years [52], as well as in the event of an unexplained serum creatinine rise or after treatment for acute rejection [50]. Data obtained in pediatric cohorts suggest that BKV reactivation occurs earlier in children than in adults [54]. Thus, it is advisable to screen pediatric kidney recipients monthly in the first 3 months (Fig. 2). In the presence of viremia, the BKV plasma load ought to be reevaluated at a 2- to 3-week interval to assess kinetics. A confirmed or rising BKV DNA load prompts intervention, which may be guided by the patient's renal function. As already mentioned for CMV, the plasma DNA level to be employed as a cutoff for starting therapeutic intervention needs to be validated within each center, as BKV nucleic acid amplification tests (NAT) are not yet standardized. Data obtained in a prospective study indicate that the incidence of viremia is higher in pediatric patients, having been observed in >20% of the screened population [54]. However, the outcome seems to be more favorable, as the prevalence of BKVAN and the rate of graft loss have been reported to be similar to those observed in adults [34, 43–46, 49]. Thus, in asymptomatic children with a BKV DNA load above threshold, it may be reasonable to adopt a cautious approach and consider a therapeutic reduction of immunosuppression only in the presence of persistent viremia. In the case of renal dysfunction, an allograft biopsy ought to be performed and treatment administered according to biopsy findings. Specifically, if concomitant rejection is ruled out, a therapeutic reduction of immunosuppression may be started; otherwise, it is advisable to treat rejection first, and then proceed with BKVAN treatment.

Fig. 2 Polyomavirus BK (BKV) screening program in use at the Pediatric Kidney Transplantation Unit of the G. Gaslini Institute, Genova
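The stepwise decision logic described above (screen, confirm or follow kinetics at a 2- to 3-week interval, then intervene according to renal function) can be restated schematically. The sketch below is only an illustrative summary of that logic, assuming a hypothetical, center-validated plasma DNA cutoff; it is not a validated clinical algorithm, and all names and values are placeholders.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Action(Enum):
    ROUTINE_SCREENING = "continue routine screening"
    RECHECK_IN_2_3_WEEKS = "repeat plasma BKV DNA in 2-3 weeks to assess kinetics"
    CONSIDER_IS_REDUCTION = "confirmed/rising load: consider stepwise immunosuppression reduction"
    BIOPSY_FIRST = "renal dysfunction: perform allograft biopsy and treat rejection first if present"

@dataclass
class PlasmaSample:
    copies_per_ml: float   # plasma BKV DNA by quantitative PCR

def next_step(previous: Optional[PlasmaSample],
              current: PlasmaSample,
              cutoff_copies_per_ml: float,
              renal_dysfunction: bool) -> Action:
    """Schematic management step for an asymptomatic pediatric recipient.
    cutoff_copies_per_ml is a placeholder: BKV NAT assays are not
    standardized, so each center must validate its own threshold."""
    if renal_dysfunction:
        return Action.BIOPSY_FIRST
    if current.copies_per_ml < cutoff_copies_per_ml:
        return Action.ROUTINE_SCREENING
    if previous is None:
        # first result above the cutoff: confirm and assess kinetics
        return Action.RECHECK_IN_2_3_WEEKS
    if current.copies_per_ml >= previous.copies_per_ml:
        return Action.CONSIDER_IS_REDUCTION
    return Action.RECHECK_IN_2_3_WEEKS

# Example: first sample above a hypothetical cutoff in an asymptomatic child
print(next_step(None, PlasmaSample(12_000), 10_000, renal_dysfunction=False))
```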

Different immunosuppression reduction strategies have been proposed to treat presumptive BKVAN and have proved effective in clearing viremia and preventing the onset of kidney damage [34, 42, 51, 53, 54]. In particular, we and others have chosen to reduce calcineurin inhibitors (CNI) as a first therapeutic step, followed by reduction/discontinuation of the antimetabolite [34, 35, 43, 54]. Other investigators propose reduction/discontinuation of the antimetabolite first, reducing CNI only subsequently [34, 43, 53]; others immediately reduce both CNI and the antimetabolite [55]. In the pediatric population, therapeutic reduction of CNI may be carried out gradually, starting with a 15–20% adjustment [54]. As an alternative to reduction, a switch from tacrolimus to cyclosporine A, or from CNI to a mammalian target of rapamycin (mTOR) inhibitor, has also been employed [34, 43].

Epstein-Barr virus

PTLD are a recognized complication of the immunosuppression required to prevent allograft rejection and occur in 1–9% of kidney allograft recipients [57, 58]. Several factors greatly increase the risk of developing PTLD, and EBV infection is critical in the pathogenesis of the majority of cases. In healthy seropositive individuals, a very tight balance exists between EBV-infected B cells and anti-EBV immunity, primarily EBV-specific, CD8-positive cytotoxic T lymphocytes (EBV-CTL) [59]. Thus, the degree of pharmacologic immunosuppression and/or HLA mismatching, and the absence of protective numbers of T cells, are major risk factors for PTLD [60]. Different combinations of these factors account for the variability in incidence. The highest incidence of PTLD is observed in children [61–64], as two major risk factors for PTLD development, EBV naiveté and a heavily immunosuppressive environment, are largely peculiar to the pediatric cohort.

The World Health Organization (WHO) has classified PTLD into four categories: early lesions, polymorphous PTLD, monomorphic PTLD, and classical Hodgkin's lymphoma-type PTLD [65]. Monomorphic PTLD are similar to lymphomas observed in nontransplant patients, with the vast majority being B-cell lymphomas, although T/natural killer (T/NK)-cell or even plasma-cell disease resembling myeloma may rarely occur; up to a third of cases observed after SOT are EBV negative, especially among the late-onset forms [61–67]. Most EBV-related PTLD reported in the literature after SOT are of host origin, although the source of EBV can be the donor, the recipient, or primary infection acquired via natural oral transmission.

The diagnosis of EBV disease may be initially suggested by clinical history and physical examination in combination with imaging. The clinical course of PTLD after SOT is heterogeneous. Patients typically present with evidence of peripheral adenopathy, hepatosplenomegaly, and/or tonsillitis, and a history of diarrhea may be suggestive of gastrointestinal disease. In kidney recipients, there might be allograft involvement. Rarely, highly immunosuppressed patients may develop a fulminant disease with multiorgan involvement. However, clinical symptoms may be scarce, and the diagnosis of EBV-related PTLD should be considered in at-risk transplant recipients with fever lasting for more than a few days without an identified source. A tissue biopsy with histological assessment is needed for the diagnosis, although in febrile syndromes, specific tissue involvement may not be present, and in some patients, lesions may be inaccessible for biopsy.

As the onset of PTLD is preceded by a preclinical phase characterized by an elevated EBV load in peripheral blood, monitoring of EBV DNA levels in blood by PCR represents a useful tool for early diagnosis and timely treatment, although not every case of PTLD is associated with elevated EBV DNA. As with CMV and BKV, EBV quantitative assays are not standardized [7, 68–70]. In addition, as successful therapy is associated with disappearance of detectable EBV DNA, viral load assessment is useful for monitoring treatment response [7, 68–70]. Accordingly, in patients diagnosed with EBV-related PTLD and undergoing treatment, a declining EBV load on weekly peripheral blood monitoring is generally a sign of clinical response. However, it has been shown that disappearance of EBV DNA from peripheral blood, as seen after treatment with the anti-CD20 monoclonal antibody rituximab, may mask persistent disease [69]. Conversely, persistently high EBV DNA levels, particularly in the absence of clinical response, suggest the need for therapy modification. How long, and how frequently, monitoring should continue once the patient has responded to treatment remains unclear.

EBV-related PTLD treatment is based on reducing the tumor burden with cytotoxic drugs [71–74] and/or B-cell-directed monoclonal antibodies [12, 69, 72] while restoring virus-specific immunity by reducing medical immunosuppression [50, 57, 60] or delivering EBV-specific CTL [11, 59, 71, 72]. In SOT recipients, reduction/withdrawal of immunosuppression remains the gold standard for first-line PTLD therapy, although there is wide variation in the reported response rate, and monoclonal PTLD are less likely to respond [75, 76]. As a side effect of this approach, the nonspecific enhancement of immunity induced by reduced immunosuppression may increase the patient's risk of developing allograft rejection. The role of interferon (IFN)-α, intravenous immunoglobulin, and antiviral drugs, possibly preceded by sensitization with the short-chain fatty acid arginine butyrate, is controversial [57, 72, 77]. Cytotoxic chemotherapy based on the multidrug regimens conventionally employed to treat de novo B-cell lymphomas is associated with high response rates but also with severe treatment-related toxicity and increased susceptibility to infections [73, 74]. Rituximab monotherapy has shown a good toxicity profile, but the response rate in the only phase II study conducted to date that also included pediatric patients did not exceed 44% [12]. Encouraging preliminary results in terms of stable complete remission rates have recently been described with low-dose chemotherapy regimens in children who failed reduced immunosuppression [71, 78]. Gross et al. obtained a 2-year overall survival of 73%, with 69% relapse-free survival, in 36 pediatric SOT recipients treated with a cyclophosphamide/steroid regimen [78], whereas an updated analysis of our group's study of reduced-dosage chemotherapy combined with rituximab and infusion of autologous EBV-CTL in pediatric kidney recipients with disseminated PTLD shows 100% disease-free survival at a median follow-up of 5 years [unpublished update of reference 71]. However, the overall outcome of PTLD in SOT recipients undergoing conventional treatment strategies is still suboptimal.

In asymptomatic transplant recipients, EBV DNA identifies patients at high risk of developing PTLD [7, 68, 70, 79], although the correlation between high viral load and PTLD onset after SOT is not as linear as that observed in hematopoietic stem-cell transplantation recipients. Patients in the SOT cohort may carry high viral loads for many months without developing PTLD; thus, the clinical significance of the high viral load carrier status is controversial [70]. In pediatric heart transplant recipients, Bingler et al. showed a high propensity for PTLD development in high viral load carriers (45% vs. 4% in patients with low/absent EBV load) [80], whereas a lower propensity was shown in pediatric small-bowel transplant recipients (11%) [81] and pediatric liver transplant recipients (3%) [82]. Available data in kidney transplantation are limited, but our group showed, in a retrospective cohort of 200 consecutive pediatric recipients, a 20% incidence of PTLD in the 25 patients with persistent high viral loads versus 0% in those with low/absent EBV loads [83]. High viral load carriers who develop PTLD are generally EBV seronegative at transplantation. It is therefore recommended that monitoring, and a possible preemptive approach aimed at preventing progression to EBV disease, be focused on high-risk patients, such as high viral load carriers who were EBV seronegative at transplantation [50]. At present, data on the efficacy of preemptive antiviral drugs are lacking, whereas it has been shown that reducing immunosuppression in EBV-DNA-positive pediatric liver recipients could prevent the development of PTLD, with PTLD incidence decreasing from 16% in a historical cohort to 2% in the study group [84]. Although reducing immunosuppression could be an effective measure to induce development of specific immunity in EBV-seronegative patients experiencing primary infection, preliminary data obtained by our group indicate that pediatric kidney recipients with a sustained high viral load, especially late after allografting, do not seem to benefit from reduced immunosuppression [83]. In these high viral load carriers, alternative forms of treatment are warranted.

Other viruses

Recent advances in molecular microbiology have made it possible to diagnose a growing number of community-acquired viral pathogens that may cause significant morbidity and mortality in transplant recipients [4].

Respiratory viruses

Respiratory viruses are the most common community-acquired infections in transplant recipients [85], and in these patients, they tend to have a more prolonged and complicated course, with higher rates of pneumonia and bacterial and fungal superinfection. Detecting the specific viral pathogen is crucial for diagnosis, and although conventional methods such as immunofluorescence and enzyme-linked immunosorbent assay (ELISA) are still commonly used, molecular methods – in particular, real-time PCR – have proven significantly more sensitive [86]. Among respiratory viruses, respiratory syncytial virus (RSV), influenza viruses (IV), parainfluenza viruses (PIV), and adenoviruses (AdV) cause the most serious disease in immunocompromised hosts [85, 87–89]. Complications include severe pneumonia (RSV, IV, PIV, AdV), and, in kidney recipients, pyelonephritis, hemorrhagic cystitis, and disseminated disease (AdV), although incidence is generally low and disease may be mild in this cohort. In addition, respiratory infections have been associated with acute graft rejection in pediatric SOT recipients [90].

Diagnosis of respiratory virus infection is generally made on nasopharyngeal wash or bronchoalveolar lavage (BAL) fluid specimens or, in the case of AdV, on stool or plasma, by conventional viral culture, PCR, or direct immunofluorescence. Ribavirin has been employed in the treatment of RSV, PIV, and AdV infections [91–93], although its efficacy is proven only in RSV-related disease, whereas neuraminidase inhibitors have been effective in controlled trials in adults and children with IV infection [94]. Although no randomized controlled trials of therapy for AdV infection have been published, intravenous cidofovir has been associated with successful outcomes [84, 95]. Regarding disease prevention, palivizumab (an RSV-specific monoclonal antibody) may prevent progression of upper respiratory tract RSV infection to lower respiratory tract involvement [96], whereas vaccination of patients, family members, and health care workers may prevent IV infection in transplanted patients [97].

Herpesvirus 6

Human herpesvirus 6 (HHV-6) is emerging as a relevant pathogen after transplantation [98]. Its role in SOT recipients is incompletely defined, but reactivation after allografting has been associated with development of myelosuppression, interstitial pneumonitis, cholestatic hepatitis, gastrointestinal manifestations, and neurological illness [4, 99]. As HHV-6 replicates in CD4+ T lymphocytes, suppression of T-cell function predisposes the patient to its reactivation, which in turn can increase the severity of other opportunistic infections, including hepatitis C and CMV [100, 101]. Distinguishing active HHV-6 replication from latency can be challenging owing to the high prevalence of infection in humans. Molecular assays are the most commonly used laboratory methods to detect HHV-6 reactivation and replication after transplantation. The use of quantitative PCR assays on serum/plasma or, better, tissue biopsy specimens [102] may be helpful in distinguishing replicating from latent HHV-6. Moreover, PCR techniques, unlike serology, can distinguish between variants A and B. Variant B most commonly causes infection posttransplant [101, 103], but HHV-6A is the variant more often associated with neurologic manifestations, which represent the most severe form of HHV-6-related pathology, with a mortality rate >50% [99, 104].

Successful treatment of HHV-6 disease has been reported with either ganciclovir or foscarnet, often combined with simultaneous reduction of immunosuppression, although at present no treatment has been validated in controlled trials [104, 105]. Some strains of HHV-6B have been found to be resistant to ganciclovir [106], which may explain the prevalence of HHV-6B after SOT in cohorts that often receive valganciclovir prophylaxis for CMV infection.

Monitoring virus-specific immunity

Monitoring specific immunity has gained consideration as a useful tool in managing viral infections in the immunocompromised host. In association with viral load determination, quantification of the specific immune response [107] has proved valuable in characterizing subgroups of patients at high risk of disease development [9, 108–110] and in assessing response to therapy [58, 70, 75, 111]. Early evidence that prediction of virus-related disease could benefit from combining viral load measurement with enumeration of specific T cells was obtained in EBV-seronegative liver recipients who developed PTLD posttransplant [9]. In this cohort, the inability to mount a specific immune response during primary EBV infection, measured as the frequency of T cells producing IFN-γ in response to EBV-transformed lymphoblastoid cell lines, correlated with the risk of developing PTLD. However, in many SOT recipients with PTLD, EBV-specific immunity is not apparently impaired, and the best immunological predictor of PTLD risk has not yet been identified.

In the setting of CMV infection, failure to control viral replication is associated with suboptimal CD4+ and CD8+ T-cell responses in terms of low numbers and impaired function. Specifically, a low proportion of CMV-specific CD8+ T cells producing IFN-γ after renal transplantation has been shown to be a risk factor for the development of high-level replication and disease [109, 110], whereas high-level PD-1 expression on CD4+ and CD8+ T cells has been associated with CMV disease [112]. As observed for EBV and CMV, the development of BK viruria and viremia and the onset of BKV disease have been associated with impaired T-cell responses [10, 108, 113]. Moreover, resolution of BKV replication and prevention of disease depend on recovery of BKV-specific T-cell immunity [53, 114]. Assessing BKV-specific T-cell frequency could allow identification, among patients with positive viremia, of those more likely to progress to BKVAN. Moreover, as preliminary data indicate that the emergence of BKV-specific T cells coincides with reduced viral load and improved or stabilized graft function, it seems reasonable to guide therapeutic modulation of immunosuppression by complementing quantification of viral load with measurement of BKV-specific immunity [10].

Future directions

There is a need to develop assays that measure general infection risk. Technological advances facilitate accurate definition and quantification of virus-specific T-cell responses, and efforts are directed at developing high-throughput assays for measuring virus-specific cellular immunity, which may allow determination of an individual's risk for specific infections. The obvious approach to enumerating virus-specific T cells would rely on brief stimulation with viral antigens followed by assessment of cytokine production by either flow cytometry or ELISpot analysis. To date, the bottleneck in the development of such assays is the availability of standardized antigens, in particular products that do not depend on specific HLA typing for presentation to T cells. Peptide pools derived from immunogenic proteins are the best option [54, 115], although these may not yet be available for all viral infections and, even when available, the best combination of immunogenic proteins for each viral infection still needs validation in clinical trials.
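As a simple numerical illustration of how such assay readouts are commonly expressed, the sketch below converts raw ELISpot counts into background-subtracted spot-forming cells per million PBMC; the figures are hypothetical, and actual positivity criteria and cutoffs would have to be defined and validated for each assay.

```python
def sfc_per_million_pbmc(spots_antigen, spots_background, pbmc_per_well):
    """Background-subtracted ELISpot frequency, expressed as
    spot-forming cells (SFC) per 10^6 PBMC."""
    net_spots = max(spots_antigen - spots_background, 0)
    return net_spots / pbmc_per_well * 1_000_000

# e.g. 48 spots with a viral peptide pool vs. 3 spots in the unstimulated
# control, with 200,000 PBMC plated per well
print(sfc_per_million_pbmc(48, 3, 200_000))   # -> 225.0 SFC per 10^6 PBMC
```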

Antiviral therapy is often limited by side effects, development of viral resistance, or weak intrinsic activity. Restoring a protective immune response by reducing immunosuppression, on the other hand, carries an increased risk of acute graft rejection or chronic allograft nephropathy. There is ample evidence that administering appropriately selected antigen-specific T cells can restore protective immunity and control established CMV and EBV infections [11, 116–118]. Recently, this strategy has been transferred to the setting of organ transplantation [70, 119].

Conclusions

In conclusion, the implementation of monitoring strategies and the application of preemptive treatment have profoundly changed the course of viral infections after transplantation. Individualizing patient management through novel approaches that take into account the kinetics of viral replication will be increasingly employed in transplant patients and, in conjunction with evaluation of immune function, may offer an optimal strategy for posttransplant infection control. Prospective studies of combined virological and immunological monitoring are warranted to assess the potential of this strategy and to identify the most suitable parameters for monitoring purposes.