Leas BF, Uhl S, Sawinski DL, et al. Calcineurin Inhibitors for Renal Transplant [Internet]. Rockville (MD): Agency for Healthcare Research and Quality (US); 2016 Mar. (Comparative Effectiveness Reviews, No. 166.)
Below, we summarize the main findings and their strength of evidence. We then discuss the findings in relation to what is already known, applicability of the findings, implications for decisionmaking, limitations, research gaps, and conclusions. When we have graded evidence as insufficient, it indicates that evidence is either unavailable, does not permit estimation of an effect, or does not permit us to draw a conclusion with at least a low level of confidence. It does not indicate that a treatment has been proven to lack efficacy.
Key Findings and Strength of Evidence
Key Question 1
One small study with high risk of bias reported on clinical validity outcomes. The evidence from this study was considered insufficient to permit conclusions about the comparative performance of high-performance liquid chromatography (HPLC) versus immunoassay for clinical outcomes. The findings of 11 studies assessing analytical performance suggest that chromatographic methods are more accurate and precise than commonly used immunoassays in measuring calcineurin inhibitor (CNI) drug levels. However, it is unclear whether the differences identified in these studies are clinically meaningful such that they would change clinical management or affect patient outcomes.
Key Question 2
The studies composing the evidence base for this question provided low-strength evidence suggesting that the risk of biopsy-proven acute rejection (BPAR) is similar between new renal transplant recipients monitored at trough level (C0) and those monitored 2 hours after dosing (C2). For the most part, the evidence for patient and graft loss and adverse events among studies comparing C0 with C2 monitoring in new renal transplant recipients was inconclusive due to study limitations and imprecise findings.
However, low-strength evidence from one randomized controlled trial (RCT) indicated that C2 monitoring led to a significantly higher cyclosporine A (CsA) mean cumulative dose increase than C0 monitoring. Low-strength evidence from this study also indicated that significantly more patients in the C2 group than in the C0 group experienced tremors. In contrast, low-strength evidence from one small RCT indicated that C2 monitoring led to significantly more CsA dose reductions than C0 monitoring among stable renal recipients.
The discrepancy between the findings related to CsA dose may be due to the difference in time post-transplant of patients in the studies. In one study, patients were only 20 days post-transplant, whereas in the other study they were stable transplant recipients at least 3 months post-transplant. CsA levels tend to fluctuate more shortly after transplantation, and reaching target levels is often more difficult. In addition, the C2 target levels in the study of newer transplants were somewhat higher than in the other studies addressing this question: 1,500 to 2,000 μg/L, compared with 1,100 to 1,400 μg/L in the other studies. Alternatively, the explanation may simply be that each conclusion rests on a single study; future studies could overturn these conclusions.
Key Questions 3a and 3b
Four types of immunosuppressive regimens designed to reduce CNI exposure were assessed. High- and moderate-strength evidence suggests that minimization strategies based on lower doses of CsA or tacrolimus (TAC) result in significantly better clinical outcomes than standard-dose regimens and provide a superior combination of increased benefits and reduced harms compared with approaches using conversion, withdrawal, or avoidance. Low-dose therapy was associated with reduced risk of acute rejection, graft loss, and opportunistic infection. Minimization was also associated with improved renal function as measured by estimated glomerular filtration rate (eGFR). These benefits were observed with both CsA and TAC, and with adjunctive use of either mycophenolic acid–based therapy, such as mycophenolate mofetil (MMF) or mycophenolate sodium (MPS), or mammalian target of rapamycin (mTOR) inhibitors, including sirolimus (SRL) and everolimus (EVR). High-strength evidence also indicates that minimization may be most effective when initiated immediately or shortly after transplant and may be less effective when implemented 6 or more months after transplant.
The evidence base addressing induction therapy used in conjunction with CNI minimization is inconclusive and needs further research, although studies suggest that use of induction therapy may not be necessary to achieve the improved outcomes associated with CNI minimization. We were unable to evaluate the role of induction therapy for conversion, withdrawal, or avoidance strategies because subgroups were too small for analysis due to heterogeneity of regimens and nonreporting of induction agent use. Additionally, induction therapy likely has limited clinical relevance to many of these studies because conversion and withdrawal strategies were usually initiated at least several months post-transplant, when the impact of induction treatment would be minimized.
Similarly, moderate-strength evidence indicated that conversion to an mTOR inhibitor or belatacept was associated with modest improvement in renal function compared with standard-dose CNI regimens. High-strength evidence also suggested that conversion to an mTOR inhibitor was associated with a decreased incidence of cytomegalovirus (CMV) infection. However, moderate-strength evidence suggested that conversion from a CNI regimen to MPS was associated with an increased risk of BPAR. The evidence for converting to an mTOR inhibitor was inconclusive for other outcomes, such as BPAR, patient death, and other infection-related adverse events. More controlled trials with longer followup may be needed to better understand the impact of conversion on longer-term outcomes, such as patient death and graft loss, and its effects among patients at higher risk for these outcomes.
High- and moderate-strength evidence suggests that planned withdrawal of CNI may result in improved renal function but is also associated with increased risk of acute rejection. Risk for acute rejection was higher in studies that used either mycophenolic acid–based treatment or mTOR inhibitors. The evidence base was insufficient to support conclusions for most of the outcomes examined. An important question the studies we reviewed did not adequately address is the interaction between the timing of withdrawal and the emergence of adverse outcomes. If events such as acute rejection, graft loss, or infection occur very soon after withdrawal of a CNI and replacement with a non-CNI agent, we may conclude that the non-CNI agent is inferior. However, an alternative explanation is that, because withdrawal protocols include a period during which the CNI dose is reduced but not eliminated, an adverse event may result primarily from the use of a low-dose CNI regimen during the transition phase rather than from the agent that eventually replaced the CNI. Conversely, if poor outcomes present several weeks or months after a CNI has been completely withdrawn, we may be more confident attributing the results to non-CNI therapy.
Avoidance strategies were examined in only nine studies, each of which used either SRL or belatacept as the primary alternative to CNI therapy. The evidence base for most outcomes was considered insufficient, although moderate-strength evidence suggests that belatacept is associated with improved renal function and, when standard-criteria donors are used, with increased risk of acute rejection. Further research on de novo avoidance of CNI treatment is necessary.
All these studies compared standard-dose CNI regimens with strategies designed to reduce CNI toxicity. Our review also identified nine trials that examined head-to-head comparisons between low-dose CNI and approaches that used conversion, withdrawal, or avoidance. Some of these studies suggest a beneficial effect on renal function associated with conversion, withdrawal, or avoidance. However, the studies are heterogeneous and enrolled small numbers of patients, and the overall evidence base is insufficient to draw conclusions.
Findings in Relation to What Is Already Known
Several systematic reviews have examined different aspects of CNI management in renal transplant patients. A recent survey of 76 laboratories providing immunosuppressant therapeutic drug monitoring describes the lack of standardization in laboratory procedures as a major factor contributing to inter-laboratory variability.36 While HPLC is the gold standard for monitoring CNIs, many laboratories do not use appropriate reference materials, such as isotope-labeled internal standards, to determine the true value of CNI concentrations.36 In a proficiency-testing study of tacrolimus across 22 clinical laboratories, Levine and Holt reported the following total error rates for each assay evaluated, compared with exact-matching isotope dilution mass spectrometry: 17.6–21.4% for CMIA, 28.0–33.4% for LC-MS, and 17.6–54.0% for ACMIA.131 The total error reported in their study was defined as 2 times the total coefficient of variation plus the average bias. Analytical comparisons of commonly used cyclosporine assays reported biases in the range of 29–57% for FPIA compared with HPLC.132 Based on our review, selection of assay methodology for measurement of calcineurin inhibitors did not have an apparent effect on clinical outcomes after renal transplantation, but this could be partially due to the bias between assay methodologies and the lack of standardization in laboratory procedures.
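As a point of reference, the total error metric defined above can be written as a simple formula (a sketch based on the definition given in the text, not the cited paper's exact notation):

\[
\text{Total error} = 2 \times \text{CV}_{\text{total}} + \text{average bias}
\]

For example, under the purely hypothetical assumption of a total coefficient of variation of 7% and an average bias of 7.6%, the total error would be \(2 \times 7\% + 7.6\% = 21.6\%\), which is on the order of the error rates reported for the immunoassays above.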
On the question of C2 monitoring of CsA, one previous review examined studies comparing the clinical outcomes of patients on CsA-based therapy monitored by C2 levels with those monitored by C0 levels. Knight and Morris assessed the evidence from trials evaluating the impact of C2 versus C0 monitoring on clinical outcomes among renal, liver, and cardiac transplant recipients.12 The evidence base for renal transplant recipients consisted of 13 studies, most of which were single-group pre-post studies that are not included in the present review. However, despite differences in the evidence base, the conclusions that Knight and Morris drew were similar to those of this review. These authors found evidence that C2 monitoring detected higher CNI levels than C0 monitoring, but no clear evidence that C2 monitoring affects renal function or acute rejection. Thus, Knight and Morris concluded that little evidence from prospective studies supports the theoretical benefits of C2 monitoring.
The other previous reviews focused on evaluating the benefits and harms associated with changing from a standard CNI regimen to an alternative regimen, specifically minimization and withdrawal,14,133 avoidance and withdrawal,134,135 and conversion to an mTOR inhibitor.136
Su and colleagues133 recently completed a systematic review of seven RCTs that examined CNI minimization or withdrawal with use of the mTOR inhibitor EVR. The alternative strategies used in these studies were associated with increased eGFR, lower serum creatinine, and no difference in graft loss or death. Low-dose regimens were associated with no difference in BPAR, while rejection risk was higher in studies that avoided CNIs. Additionally, patients on EVR had lower risk of CMV infection but were at greater risk for nonfatal adverse events. Moore et al.14 reviewed 19 RCTs that evaluated CNI minimization or withdrawal with use of MMF or MPS. Minimization regimens were associated with improved renal function, as measured by GFR, and reduced risk of graft loss. No harms were increased in the minimization trials. Conversely, withdrawal studies were associated with greater risk of BPAR but with improved GFR and serum creatinine. These results are consistent with our meta-analyses, which found significant benefits associated with low-dose approaches to CNI management but lesser benefits and potential harms resulting from CNI withdrawal regimens.
Yan and colleagues' review134 identified 11 RCTs of withdrawal strategies and 16 RCTs that used CNI avoidance. Early withdrawal and SRL-based avoidance were associated with improved renal function and no difference in graft loss, patient survival, or adverse events. These regimens also resulted in higher risk of BPAR at 1 year, but no significant differences were observed at 2 years after transplant. A more recent review by Bai and colleagues evaluated seven RCTs that examined CNI withdrawal.135 Withdrawal from a CNI was associated with greater risk of acute rejection and thrombocytopenia but also with improved renal function and decreased risk of hypertension.
Lim and colleagues conducted a recent systematic review of RCTs comparing delayed conversion from CNIs to mTOR inhibitors with remaining on CNIs.136 The overall evidence base for this review consisted of 27 trials; however, only 13 trials reported on outcomes of interest and contributed to the review's primary analyses. Most of these trials were included in the present review. The primary outcomes the Lim review analyzed included renal function (as measured by GFR), acute rejection, mortality, graft loss, and adverse events. Similar to the results of this review, Lim et al. found that patients converted to an mTOR inhibitor had slightly higher GFR at 1-year followup than patients remaining on a CNI. The results of their GFR analysis also indicated substantial heterogeneity (I²=68%) that was not explained by time post-transplant or type of mTOR inhibitor. Lim et al.'s findings also indicated that rejection risk was higher among patients converted to an mTOR inhibitor. Finally, like this review, Lim et al. found that conversion to an mTOR inhibitor was associated with fewer reported incidences of CMV. However, they noted that discontinuation secondary to adverse events was generally higher among patients converting to an mTOR inhibitor.
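For readers unfamiliar with the heterogeneity statistic cited above, I² is conventionally calculated from Cochran's Q and the degrees of freedom of the pooled comparison (this is the standard Higgins–Thompson definition, not a formula specific to the Lim review):

\[
I^2 = \max\!\left(0,\ \frac{Q - df}{Q}\right) \times 100\%
\]

An I² of 68% therefore indicates that roughly two-thirds of the observed variability across the pooled GFR estimates reflects between-study heterogeneity rather than chance.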
Another important point to address is the safety of SRL as an alternative to CNI-based treatment. A recent meta-analysis by Knoll et al. examined the effectiveness and harms associated with SRL-based regimens after renal transplantation.137 In contrast to our review, they found a significantly increased risk of death associated with SRL use. However, their analysis included all SRL trials, not just trials of SRL in the context of CNI minimization, so the difference in findings is not surprising. Nevertheless, these findings do suggest the need for more research on the safety of SRL.
Similarly, the ELITE-Symphony study4 reported that renal function improved in its low-dose TAC arm when compared with SRL, while the three head-to-head studies that we reviewed found that TAC was associated with poorer renal function compared with alternative SRL-based regimens. This inconsistency is likely attributable to differences in the patient populations and the adjunctive and induction therapies used in each study, suggesting that further research is needed to clarify the effect of these strategies on renal function.
Applicability
Five important factors limit the applicability of these findings to patient care, specifically when considering the evidence examining alternative regimens. First, most of the patients enrolled in the studies we reviewed were at average or low risk for poor outcomes, while populations at higher risk for graft rejection, infection, or adverse events are not well-represented in the evidence base. Many of the RCTs included in this review excluded highly sensitized populations, retransplants, and patients with significant comorbid conditions. These trials did not report socioeconomic status, and 21 studies excluded patients over age 65. No studies focused exclusively on graft recipients with demographic characteristics often associated with greater risk for acute rejection, such as African-Americans, and almost no studies stratified results by this factor or by age or immunologic risk. Additionally, we excluded studies in multi-organ transplant populations. Therefore, this evidence base may primarily represent the effects of alternative CNI regimens on average- or low-risk patients and may not indicate how changes in standard CNI regimens might affect higher-risk groups or other important subpopulations of renal transplant recipients.
Second, these RCTs implemented alternative CNI regimens as planned strategies in patients randomly assigned to treatment or control groups. Transplant recipients who required a regimen change due to CNI toxicity were not specifically studied in these trials and were not analyzed separately. Thus, the evidence base may not reflect how minimization, conversion, or withdrawal strategies affect outcomes in patients who have experienced CNI-related adverse events.
Third, the studies included in this review disproportionately examined CsA rather than TAC. Contemporary immunosuppressive practice, however, favors use of TAC over CsA. Therapeutic effectiveness and toxicity vary between the two types of CNIs. Our overall findings were generally consistent with the results of subgroup analyses of studies using CsA but were less similar to studies that administered TAC. However, most of the outcomes we focused on throughout this review, including acute rejection, graft loss, and risk of infection, may not be expected to vary substantially between TAC and CsA. Other outcomes, such as renal function, may be more sensitive to differences between the two therapies. Perhaps the most important outcome that we might expect to vary between CsA and TAC regimens is toxicity. However, data on nephrotoxicity and neurotoxicity were rarely reported in the studies we reviewed. It is therefore unclear how the results of studies on alternative CsA regimens apply to regimens using TAC.
Fourth, minimization regimens varied widely in selection of low-dose target levels. Standard definitions for low-dose targets have not been codified, and the evidence base does not indicate optimal levels for reducing CsA or TAC exposure. Similarly, achievement of low-dose CNI target levels for minimization regimens was poorly and inconsistently reported and varied across studies. Moreover, levels that were considered “low” when some studies were conducted may now be considered “standard,” so the evidence base may not fully reflect current patterns of CNI use.
Finally, it is important to note that we examined only immunosuppression for renal transplant recipients. The results of these studies may not apply to CNI therapy for patients with liver, pancreas, or other solid organ transplants, or for patients who receive sequential or combination organ transplants.
Implications for Clinical and Policy Decisionmaking
The evidence base examined in this systematic review has important implications for clinicians involved in the care of renal transplant recipients, most notably transplant surgeons, nephrologists, pharmacists, nurses, and infectious disease specialists. To reduce the risk of CNI-associated toxicity and adverse events, treatment with low-dose CsA or TAC in combination with MMF, MPS, or mTOR inhibitors may provide sufficient immunosuppressive therapy to reduce risk of acute rejection and opportunistic infection while enabling improved renal function. Conversion or withdrawal strategies may also help improve renal function but can result in higher risk for acute rejection. The potential benefits and risks of de novo CNI avoidance are unclear.
The evidence base examined in this report includes a disproportionate number of studies of CsA and relatively few studies of TAC, although TAC is utilized more frequently than CsA in the United States. Clinicians should recognize that the findings discussed throughout this report might characterize more precisely the effects of CsA rather than TAC. However, we do not suggest that CsA should be preferred over TAC or that current use of TAC is inappropriate. Instead, we wish to highlight the need for additional research to identify optimal strategies for administering and managing CNI immunosuppression.
Therapeutic drug monitoring (TDM) of adjunctive therapies such as MMF or mTOR inhibitors was not evaluated in this review. There is an emerging view that mycophenolic acid (MPA) exposure, rather than CNI exposure, better predicts clinical outcomes following renal transplantation. However, whether TDM should be performed for MPA is a matter of debate.138,139 Prospective, randomized trials performing MPA TDM have shown conflicting results.140,141 However, a recent study by Abdi employing a time-to-event model demonstrated that acute rejection, graft loss, and death following renal transplantation were significantly associated with MPA exposure but not CNI exposure.142
We did review studies comparing trough level monitoring to C2 monitoring for CsA. However, CsA is less commonly used in clinical practice today. There is still a question of the best timepoint or timepoints for monitoring TAC, as TAC trough levels are not well correlated with total exposure.143
Adjunctive therapies such as MMF or mTOR inhibitors were not evaluated independently of CNI utilization in this review. Although these currently used therapeutic agents were not compared head-to-head, regimens that paired each with low-dose CNI therapy were associated with good patient outcomes. We also did not perform independent assessments of induction therapy. Our analyses found the evidence insufficient to support strong conclusions about induction agents, and the results do not indicate whether specific induction strategies, when used with low-dose CNIs, yield greater or lesser benefits. Even regimens without induction therapy were associated with positive patient outcomes. However, it is important to note that most of the immunosuppressive regimens we evaluated in this report included multiple therapeutic agents. Distinguishing the effects of individual strategies within complex multicomponent treatments is a significant challenge for clinicians and researchers.144
Carefully selecting the optimal time for implementing an alternative immunosuppressive strategy may be important for achieving positive patient outcomes. Minimization and avoidance regimens can be planned in advance for the care of new renal transplant recipients. Conversion and withdrawal regimens, on the other hand, are most frequently initiated in response to adverse events in patients receiving CNIs, but they can also be planned. Early minimization appears to be more beneficial than later minimization and is also associated, somewhat surprisingly, with lower risk of acute rejection compared to standard regimens. Conversion and withdrawal may confer some benefits but are also associated with increased risks. Avoidance strategies have not been widely studied yet. Clinicians treating new renal transplant recipients may therefore find value in deciding on a long-term approach early in the treatment process.
Clinicians must carefully weigh many therapeutic options when evaluating which immunosuppressive regimen to implement and must consider each patient's immunologic risk and comorbid medical conditions. The studies assessed in this review were conducted primarily in low-risk populations. When clinicians treat higher-risk patients they should consider how the balance of potential benefits and risks evaluated in our evidence tables may differ for those populations. However, it is important for clinicians to understand how CNI-based immunosuppression and current alternative strategies affect low- or average-risk patients, since the latter compose a majority of the renal transplant population. Studies in relatively healthier patients may also be necessary for establishing benchmarks that can be used when evaluating immunosuppressive therapy in higher-risk populations.
For all of the results described in this review, clinicians must evaluate the clinical significance of our findings. For example, renal function was often identified as an outcome that improved after implementation of an alternative regimen, but the absolute change in eGFR or creatinine clearance was sometimes of limited clinical relevance. Clinicians should consider how the effect sizes we described for renal function and other outcomes may translate into patient well-being.
Clinicians must also consider patient adherence to medication regimens when evaluating therapeutic options. A recent survey of 60 renal transplant patients found that low adherence was associated with poorer renal function, and the most frequently cited reason for nonadherence was patient forgetfulness.145 Clinicians should discuss with patients and their families potential barriers to adherence, including unwanted drug side effects, interactions between immunosuppressive drugs and other medications, complexity of medication regimens, and cost.
Medication costs are an important consideration for patients, clinicians, health insurers, and policymakers. While Medicare often covers 80 percent of the cost of immunosuppression for up to 3 years following renal transplantation, the longer-term burden of paying for immunosuppression may fall disproportionately on patients and their families when Medicare entitlement is based solely on end-stage renal disease. CsA, TAC, MMF, MPS, and SRL are available in generic formulations, but belatacept is not.
Another important consideration is the growing body of research on pharmacogenetic testing. Development of validated biomarkers may help clinicians better individualize immunosuppressive regimens and potentially prolong patient and graft survival by minimizing long-term drug toxicity.
Monitoring therapeutic drug levels is a critical component of CNI management. Although the evidence base for KQ 1 is limited, the ease of use of immunoassays may outweigh any potential improvements in analytic validity resulting from the use of HPLC methodologies. Similarly, the evidence base for KQ 2 was limited, and preferences for C0 or C2 monitoring of CsA may be most influenced by practical considerations, such as patient convenience. C2 monitoring is less practical because samples must be drawn within 15 minutes of the 2-hour target to avoid large shifts in concentration during the absorption phase, whereas C0 samples can be drawn within a 10- to 14-hour window because trough levels reflect the elimination phase.12,13 Finally, other factors also influence drug dose, such as eating habits and use of certain over-the-counter medications or herbal supplements.
Limitations of the Comparative Effectiveness Review Process
Due to the broad scope of the KQs, the many potentially relevant studies, and the time and resources available to complete the review, we confined our final analyses to RCTs for KQ 3. Many observational studies have been published that address this topic, and by excluding non-RCTs we may have omitted important findings, especially those related to adverse events. However, our systematic searches did not exclude observational studies; we reviewed their characteristics and found that they were generally small, lacked extended followup, and reported outcomes that were adequately represented by the available RCTs.
We also limited our review to studies published in English, which could have excluded important articles published in other languages. However, we included 22 studies representing 1,939 subjects from countries outside North America, Western Europe, and Australia, including studies conducted in Asia, the Middle East, and South America.
Another limitation of the systematic review and meta-analytic process is that combining multiple studies into broad analytic categories can mask important sources of heterogeneity. For example, studies that used an mTOR inhibitor were frequently combined, whether they used SRL or EVR, because their pharmacologic mechanisms are similar. Studies also varied in whether and how they excluded higher-risk patients, in how they measured renal function, and in the selection of medication dosing and therapeutic targets. We performed numerous subgroup analyses to address important types of study variation and conducted sensitivity analyses to explore heterogeneity. However, we could not explore every potentially important source of variance given the complexity of immunosuppression management in transplant recipients.
Limitations of the Evidence Base
Very few studies addressed KQs 1 and 2. These studies were highly complex and heterogeneous, and we were not able to conduct meta-analysis given these limitations. Only one RCT examined clinical outcomes of different monitoring methods. Most of the studies were not randomized and used pre-post designs. While many of the studies examining analytical accuracy considered HPLC the gold standard, most did not use appropriate reference materials, such as isotope-labeled internal standards, to determine the true value of CNI concentrations. In addition, assays and methods have improved over the past 10 years; thus, assays used in an earlier era may not be comparable to newer assay technologies.
We identified 88 unique RCTs that addressed KQ 3, which is a robust evidence base. However, variations in patient populations and medication regimens may limit the generalizability of individual studies as well as our meta-analyses.
Small sample size was an important limitation in many studies. Although we were able to perform meta-analyses of many key outcomes, small studies can yield imprecise statistical estimates. Sample size was an especially notable limitation in our evaluation of low-frequency events, such as patient death and graft loss. As a result, the most robust findings associated with alternative CNI regimens are based on changes in renal function and risk of acute rejection, while other important outcomes are not well addressed. Moreover, measures of improvement in renal function that achieve statistical significance may not indicate clinically meaningful differences in patient care. In addition, for the outcome of BPAR, the studies we reviewed varied in their use of biopsy testing, with some studies implementing routine “per-protocol” biopsies, while other studies used biopsy primarily to confirm suspected cases of organ rejection. These different strategies may have introduced variation in the study data we evaluated.
Similarly, incomplete and inconsistent reporting of adverse events limited our ability to adequately assess the potential impact of alternative CNI strategies on patient harms. This was particularly important for CNI-related nephrotoxicity and chronic allograft dysfunction, which were not assessed systematically in this review because too few studies reported comparable data for these outcomes. Infections were also reported inconsistently, and in many comparisons we lacked sufficient data to support conclusions about the effect of alternative CNI strategies on infection rates. This is a major limitation of the evidence base because infection risk is a critical factor that clinicians must consider when managing immunosuppressive regimens.
Another major limitation is the short followup period reported in most studies. We used 1-year outcome data whenever possible in our review because that was the time period reported most consistently. Incidence of major adverse outcomes (such as acute rejection or graft loss) within 1 year also provides the most direct evidence on the effects of alternative regimens, since events occurring relatively soon after implementation of a new approach are more likely to be associated with that approach, while events that emerge later may be attributed to other changes in the patient's management or morbidity. Nevertheless, longer-term outcomes are important to patients and clinicians and provide important insight into the effect of CNI management strategies. Outcome measures beyond 1 year can also inform clinicians about the sustainability of alternative strategies or identify unforeseen risks. However, very few studies examined long-term results.
Patient adherence to prescribed CNI regimens is another important factor that limits our findings. Measures of adherence were not consistently reported, and failure of patients to remain on CNI regimens may account for poorer outcomes or limited clinical improvement. Similarly, imperfect fidelity to monitoring protocols (e.g., variation in when clinical staff actually collect samples for laboratory testing) was an inherent limitation of many RCTs. Another limitation is the potential imprecision in laboratory results, between and within labs, which may affect the validity of individual study results.
The disproportionate number of studies that used CsA rather than TAC may also limit the generalizability of the evidence base to current immunosuppressive practice. Finally, we again emphasize that most of the studies we reviewed were conducted in low- or average-risk populations and were implemented as planned strategies rather than therapeutic responses to patients who exhibited CNI-related adverse events. The effects of alternative strategies on high-risk patients remain largely unknown.
Research Gaps
For KQs 1 and 2, insufficient evidence directly compares analytical and clinical outcomes between different monitoring techniques. Current studies also do not adequately consider the resources and costs associated with different monitoring methods, lack of standardization in laboratory procedures, patient and clinician preferences, and availability of specific methods, such as HPLC. In addition, the followup periods reported in most studies are not long enough for assessing many relevant outcomes. Comparisons of monitoring techniques are particularly important because long-term overexposure to immunosuppression could potentially contribute to post-transplant complications such as infection, malignancy, cardiovascular disease, diabetes, and related allograft changes (formerly known as chronic allograft nephropathy).
Although our review identified many studies examining KQ 3, significant knowledge gaps emerged. Insufficient evidence addresses the management of immunosuppression in high-risk populations, including elderly renal transplant patients, African-Americans, those of lower socioeconomic status, patients who have undergone retransplantation, and those living with significant comorbidities, including HIV.
We also found that the evidence base lacks studies comparing low-dose TAC with standard-dose TAC in the context of various adjunctive therapies and induction agents. It is unclear how the evidence we reviewed, based largely on studies of CsA, should be interpreted relative to current practices that favor TAC. Our analyses detected heterogeneity in our findings that may be attributed partly to variation in the immunosuppressive regimens that were evaluated. Moreover, subgroup analysis found that the outcomes reported in studies using TAC tended to diverge more from our overall findings than did those of studies that administered CsA.
Similarly, the evidence on the role of induction agents is insufficient and inconsistent, particularly in low-dose CNI regimens and avoidance strategies. While many studies have examined induction therapy independently, data on their effectiveness within these alternative regimens are missing. Also, too few studies directly compare alternative regimens to each other, as most studies instead compare alternative regimens to standard, full-dose CNI therapy. We also did not find sufficient evidence to adequately evaluate belatacept therapy.
The current evidence base does not adequately measure or report important patient-centered outcomes, including preferences for different medications, adherence to immunosuppressive therapy, and side effects of CNIs and other immunosuppressants. Many other outcomes are not reported or are described inconsistently, such as CNI-associated toxicity, graft dysfunction, and infections. Finally, data from longer-term followup are lacking. Almost no studies have assessed the effectiveness, harms, or levels of patient adherence associated with alternative regimens after 5, 10, or 15 years.
Conclusions
We identified 105 studies published between 1994 and 2015 that addressed management of CNI immunosuppression and met our inclusion criteria. Eleven studies examined technologies used to monitor therapeutic drug levels in patients on CNI therapy. Six studies compared monitoring of CsA levels at trough (C0) with monitoring 2 hours after administration (C2). The remaining 88 trials evaluated a variety of alternative strategies to full-dose CNI therapy.
The findings of the studies addressing analytic validity suggest that chromatographic techniques (e.g., HPLC, LC-MS/MS) more accurately measure CNI concentration levels than commonly used immunoassays. However, it is unclear whether the differences identified in these studies are clinically meaningful such that they would change clinical management or affect patient outcomes. In addition, these techniques are typically time-consuming, labor-intensive, and less standardized, and thus their results may be more provider-dependent.
For KQ 2, the current state of the evidence does not suggest any clear clinical benefit of C2 monitoring over C0; however, low-strength evidence suggests that risk of BPAR is similar between new renal transplant recipients monitored at C0 and those monitored at C2. One RCT indicated that C2 monitoring led to a significantly higher CsA mean cumulative dose increase than C0 monitoring in recent transplant recipients. Low-strength evidence from this same study also indicated that significantly more patients in the C2 group than in the C0 group experienced tremors. In contrast, another small RCT indicated that C2 monitoring led to significantly more CsA dose reductions than C0 monitoring among stable renal recipients. Whether this reflects actual differences between recent and stable renal recipients, or simply reflects the fact that each conclusion is based on a single study, is uncertain. Future studies might overturn these conclusions.
For KQ 3, high-strength evidence suggests that immunosuppression with low-dose CsA or TAC, in combination with mycophenolic acid formulations or mTOR inhibitors, results in lower risk of acute rejection and graft loss and improved renal function. The benefits of minimization strategies may be most significant when initiated from the time of transplant or shortly thereafter. Use of induction agents is not strongly associated with improved outcomes in minimization regimens, but additional research is necessary to clarify the effect of induction therapy. Conversion from a CNI to an mTOR inhibitor is associated with modest improvement in renal function. Conversion is also associated with a slightly lower risk of CMV, but the evidence was inconclusive for other opportunistic infections. Withdrawal of a CNI is not associated with improvements in renal function and may increase the risk of acute rejection. Avoidance strategies employing de novo use of SRL, EVR, or belatacept have not been studied widely, and further research is necessary to identify potential benefits or harms of CNI avoidance.
These regimens have been studied primarily in low-risk populations, and the evidence base therefore can directly inform care of most renal transplant recipients. However, further research is necessary to generate evidence of optimal immunosuppression strategies for high-risk patients. More comprehensive and consistent reporting of clinically important and patient-centered outcomes is needed, including measures of renal function, CNI-related toxicity, side effects, and patient adherence to immunosuppressive regimens.