NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.
Viswanathan M, Kahwati LC, Golin CE, et al. Medication Therapy Management Interventions in Outpatient Settings [Internet]. Rockville (MD): Agency for Healthcare Research and Quality (US); 2014 Nov. (Comparative Effectiveness Reviews, No. 138.)
The methods for this comparative effectiveness review (CER) on medication therapy management (MTM) follow the methods suggested in the Agency for Healthcare Research and Quality (AHRQ) “Methods Guide for Effectiveness and Comparative Effectiveness Reviews” (available at http://www.effectivehealthcare.ahrq.gov/methodsguide.cfm). We specified methods and analyses a priori in a protocol posted on the AHRQ website,25 following a standard framework for specifying population, interventions, comparators, outcomes, and settings (PICOTS). The main sections in this chapter reflect the elements of the protocol established for the CER; certain methods map to the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) checklist.26 We describe below instances in which our a priori methods required further specification during the project.
Topic Refinement and Review Protocol
The topic of this report and preliminary Key Questions (KQs) arose through a nomination from the Pharmacy Quality Alliance. Key Informants representing several clinical and scientific disciplines provided input on the initial KQs; we revised them as needed. An initial draft of the revised KQs was posted for public comment from March 6, 2013, through April 2, 2013, on the AHRQ Effective Health Care program Web site. We received comments from 23 professional organizations and individuals and further revised KQs as appropriate. Specifically, we
- added a new KQ (KQ 1) to describe the components and implementation features of MTM interventions,
- included additional intermediate outcomes in KQ 2,
- reworded KQ 3 to include MTM components,
- specified MTM components and implementation features for KQ 3 in the PICOTS,
- specified additional patient characteristics for KQ 4 in the PICOTS, and
- rephrased KQ 5 to make the response conditional on identifying whether any harms of MTM exist.
Literature Search and Identification Strategy
Search Strategy
To identify articles relevant to each KQ, we began with a focused MEDLINE® search for MTM interventions using a combination of medical subject headings (MeSH®) and title and abstract keywords, limiting the search to English-language and human-only studies (Table 3) (inception through January 9, 2014). We also searched the Cochrane Library (inception through January 10, 2014) and the International Pharmaceutical Abstracts database (inception through January 10, 2014) using analogous search terms (Appendix A). We selected these databases based on preliminary searches and consultation with content experts. We conducted quality checks to ensure that the searches identified known studies (i.e., studies identified during topic nomination and refinement). Based on these quality checks, we revised and ran additional searches (specifically, drug therapy management, drug therapy problem, and medications management) to avoid missing articles that might prove eligible for this CER.
In addition, we searched the gray literature for unpublished studies relevant to this review and included studies that met all the inclusion criteria and contained enough methodological information to assess risk of bias. Specifically, sources of gray literature included ClinicalTrials.gov, the World Health Organization's International Clinical Trials Registry Platform, Health Services Research Projects in Progress (HSRProj), the National Institutes of Health Research Portfolio Online Reporting Tools, the Database of Promoting Health Effectiveness Reviews, the New York Academy of Medicine Grey Literature Report, and CMS.gov. AHRQ's Scientific Resource Center managed the process of submitting requests for scientific information packets, which contain information about MTM programs and services of interest from relevant providers.
We reviewed our search strategy with an independent information specialist and the Technical Expert Panel and supplemented it according to their recommendations. In addition, to avoid retrieval bias, we manually searched the reference lists of landmark studies and background articles on this topic to identify any relevant citations that our electronic searches might have missed.
We conducted an updated literature search (of the same databases searched initially) concurrent with the peer review process. We also investigated any literature the peer reviewers or the public suggested and, if appropriate, incorporated additional studies into the final review. We determined the appropriateness of those studies using the methods and criteria described above.
We planned to include pooled estimates of effect or other relevant results from systematic reviews that met our inclusion/exclusion criteria and to evaluate the quality of included systematic reviews using the AMSTAR tool.27 If appropriate and feasible, we planned to update the results of these reviews quantitatively or qualitatively. We also planned to review the reference lists of systematic reviews whose inclusion and exclusion criteria differed from ours to ensure that we included all relevant studies.
Inclusion/Exclusion Criteria
We specified our inclusion and exclusion criteria based on the population, intervention, outcome, timing, and settings identified through the topic refinement exercise. We excluded studies published in languages other than English. We also excluded study designs without control groups to ensure that our pool of included studies could inform the causal link between the intervention and outcomes.
In conducting the review, we found that we needed to define the intervention with greater specificity than originally thought so that we could include MTM interventions but exclude disease management interventions. Specifically, we required that included studies had conducted a comprehensive, rather than condition-specific, medication review, as required in our PICOTS criteria. Although we had not planned to contact study authors routinely for additional information, the lack of clarity regarding intervention elements in numerous published studies necessitated our contacting authors. For these studies, we based our decisions on inclusion or exclusion on email communication with the authors. (Appendix D specifies the studies or publications for which we sought such information but received no response from authors as of the time the draft report was submitted for peer review.)
Study Selection
Pairs of trained members of the research team independently reviewed each title and abstract against our inclusion/exclusion criteria. Studies marked for possible inclusion by either reviewer underwent full-text review. For studies that lacked adequate information to determine inclusion or exclusion, we retrieved the full text and then made the determination.
We retrieved and reviewed the full text of all titles marked for inclusion during the title/abstract review phase. Two trained members of the team independently reviewed each full-text article for inclusion or exclusion based on the eligibility criteria specified in Table 4. If both reviewers agreed that a study did not meet the eligibility criteria, they excluded the study. If the reviewers disagreed, they discussed differences to achieve a consensus. If they could not reach consensus, a third senior member of the review team resolved the conflict. We tracked all results in an EndNote® (Thomson Reuters, New York, NY) database. We recorded the reason that each excluded full-text publication did not satisfy the eligibility criteria. Appendix C lists all studies excluded at this stage together with the reason(s) for exclusion.
Data Extraction
For studies that met our inclusion criteria, we abstracted relevant information into evidence tables (Appendix D). We piloted our approach with a sample of studies and revised the form thereafter. We designed data abstraction forms to gather pertinent information from each article, including the characteristics of the study populations, interventions, comparators, outcomes, timing, settings, study designs, methods, and results. A second member of the team reviewed all data abstractions for completeness and accuracy. (Relevant forms can be found in Appendix B.)
Assessment of Risk of Bias of Individual Studies
To assess the risk of bias of individual studies, we used predefined criteria developed by AHRQ.28 For randomized controlled trials (RCTs), we relied on the risk-of-bias tool developed by the Cochrane Collaboration.29 We assessed the risk of bias of observational studies using an item bank developed by RTI International.30
In general terms, results of a study with low risk of bias are considered valid. Studies marked low risk of bias did not have any major flaws in design or execution. A study with medium risk of bias is susceptible to some bias but probably not sufficient to invalidate its results. A study with high risk of bias has significant methodological flaws (e.g., stemming from serious errors in design or analysis) that may invalidate its results. Primary concerns for our review included selection bias, confounding, performance bias, detection bias, and attrition bias. Very high attrition rates, particularly when coupled with a failure to control for confounding or conduct intention-to-treat analyses, resulted in a rating of high risk of bias for trials and prospective cohort studies. Likewise, we rated studies with an inherently high risk of confounding in design (e.g., observational studies comparing refusers versus acceptors of MTM interventions) as high risk of bias if they failed to address confounding through design (e.g., matching) or analysis (e.g., regression). Specifically, we evaluated trials on the adequacy of randomization, allocation concealment, similarity of groups at baseline, masking, attrition, whether intention-to-treat analysis was used, method of handling dropouts and missing data, validity and reliability of outcome measures, and treatment fidelity. For observational studies, we did not assess adequacy of randomization or allocation concealment but did assess for confounding. We also evaluated trials for confounding due to randomization failure through biased selection or attrition. In other words, we evaluated trials with potential randomization failure for the same risks of bias as observational studies.
We excluded studies that we deemed at high risk of bias from our main data synthesis and main analyses. We included them for sensitivity analyses; in cases when we had no other available or credible evidence, we included in the report a brief synopsis of studies assessed as high risk of bias.
Data Synthesis
When we found three or more similar studies for a comparison of interest, we conducted meta-analysis of the data from those studies using Comprehensive Meta-Analysis software. For all analyses, we used random-effects models to estimate pooled or comparative effects. To determine whether quantitative analyses were appropriate, we assessed the clinical and methodological heterogeneity of the studies under consideration following established guidance;31 that is, we qualitatively assessed the PICOTS of the included studies, looking for similarities and differences. When we conducted quantitative syntheses (i.e., meta-analysis), we assessed statistical heterogeneity in effects between studies by calculating the chi-squared statistic and the I2 statistic (the proportion of variation in study estimates attributable to heterogeneity). The importance of the observed value of I2 depends on the magnitude and direction of effects and on the strength of evidence for heterogeneity (e.g., the p-value from the chi-squared test or a confidence interval for I2). Where relevant, we examined potential sources of heterogeneity using sensitivity analysis.
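The heterogeneity statistics described above can be sketched in a few lines of code. This is an illustrative sketch only, not the authors' method (the review used Comprehensive Meta-Analysis software): it pools hypothetical log odds ratios from three invented studies with a DerSimonian-Laird random-effects model and reports Cochran's Q (chi-squared) and the I2 statistic.

```python
# Illustrative sketch (not the review's actual code): DerSimonian-Laird
# random-effects pooling with Cochran's Q and the I-squared statistic.
# The three effect estimates and variances below are invented for demonstration.
import math

def random_effects_pool(effects, variances):
    """Pool study effects with a DerSimonian-Laird random-effects model."""
    k = len(effects)
    w = [1.0 / v for v in variances]                      # inverse-variance weights
    pooled_fe = sum(wi * ei for wi, ei in zip(w, effects)) / sum(w)
    # Cochran's Q: weighted squared deviations from the fixed-effect mean
    q = sum(wi * (ei - pooled_fe) ** 2 for wi, ei in zip(w, effects))
    df = k - 1
    # I-squared: proportion of variation in estimates attributable to heterogeneity
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    # Between-study variance (tau-squared), DerSimonian-Laird estimator
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c) if c > 0 else 0.0
    # Random-effects weights incorporate tau-squared
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled_re = sum(wi * ei for wi, ei in zip(w_re, effects)) / sum(w_re)
    se_re = math.sqrt(1.0 / sum(w_re))
    ci = (pooled_re - 1.96 * se_re, pooled_re + 1.96 * se_re)
    return pooled_re, ci, q, i2

# Hypothetical log odds ratios and variances from three similar studies
pooled, ci, q, i2 = random_effects_pool([0.10, 0.25, 0.40], [0.04, 0.05, 0.06])
print(f"pooled log OR = {pooled:.3f}, 95% CI {ci[0]:.3f} to {ci[1]:.3f}")
print(f"Q = {q:.2f}, I^2 = {i2:.1f}%")
```

When between-study heterogeneity is negligible (Q below its degrees of freedom, as in this toy example), I2 is zero and the random-effects estimate coincides with the fixed-effect one.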
When quantitative analyses were not appropriate (e.g., because of heterogeneity, insufficient numbers of similar studies, or insufficiency or variation in outcome reporting), we synthesized the data qualitatively. Whenever possible, we computed confidence intervals for individual outcomes.
Numerous articles did not provide complete information about findings (e.g., 95 percent confidence intervals, statistical significance values, or between-group data). In many cases, therefore, we had to calculate odds ratios, mean differences, or standardized mean differences, the relevant 95 percent confidence intervals, and p-values. In all such cases in which we calculated data, we specify this in the Results chapter; information not specifically called out as “calculated” is taken from the original articles.
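The kind of recalculation described above can be illustrated for an odds ratio. This is a hedged sketch, not the review's code, and the 2x2 counts are invented: it recovers an odds ratio, a Woolf (log-scale) 95 percent confidence interval, and a normal-approximation two-sided p-value from raw event counts per group.

```python
# Illustrative sketch (not the review's actual code): computing an odds ratio,
# its 95% CI, and a two-sided p-value from 2x2 counts, as one would when an
# article reports only event counts per group. All counts are invented.
import math

def odds_ratio_ci(events_tx, n_tx, events_ctrl, n_ctrl):
    """Odds ratio with Woolf (log) 95% CI and a normal-approximation p-value."""
    a, b = events_tx, n_tx - events_tx          # treatment: events / non-events
    c, d = events_ctrl, n_ctrl - events_ctrl    # control:   events / non-events
    or_ = (a * d) / (b * c)
    # Standard error of log(OR): square root of the sum of reciprocal cell counts
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    z = abs(math.log(or_)) / se
    p = 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))  # two-sided normal p
    return or_, (lo, hi), p

or_, ci, p = odds_ratio_ci(events_tx=30, n_tx=100, events_ctrl=20, n_ctrl=100)
print(f"OR = {or_:.2f}, 95% CI {ci[0]:.2f} to {ci[1]:.2f}, p = {p:.3f}")
```

Standardized mean differences and their confidence intervals can be recovered analogously from group means, standard deviations, and sample sizes.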
Grading Strength of Evidence for Individual Comparisons and Outcomes
We graded the strength of evidence based on the guidance established for the AHRQ Evidence-based Practice Center program.32 Developed to grade the overall strength of a body of evidence, this approach incorporates four key domains: study limitations (includes study design and aggregate quality), consistency (similar magnitude and direction of effect), directness (evidence links interventions directly to the outcome of interest for the review), and precision of the evidence (degree of certainty surrounding an effect estimate based on sample size and number of events). In addition, the evidence may be rated as lower strength for bodies of evidence with suspected reporting bias from publication, selective outcome reporting, or selective analysis reporting. Regardless of the specific risk of bias of observational studies, this approach to grading the evidence assigns observational studies a grade of high study limitations, which then leads to low strength of evidence. The strength of evidence from observational studies can be rated higher in scenarios such as a strong dose-response association, plausible confounding that would decrease the observed effect, or a high strength of association (magnitude of effect). We evaluated optimal information size criteria to make judgments about precision based on guidance from Guyatt and colleagues33 and based our grades on low or medium risk-of-bias RCTs or observational studies unless none were available.
Our approach is consistent with current strength of evidence guidance developed by GRADE and AHRQ EPCs. The GRADE guidance explicitly discourages the inclusion and averaging of risk of bias across studies with different underlying risk-of-bias criteria. Rather, it suggests considering including only studies with a lower risk of bias.34 Likewise, the AHRQ EPC guidance notes that reviewers may focus “strength of evidence on the subset of studies that provide the least limited, most direct, and most reliable evidence for an outcome or comparison, after analysis of all the evidence.”32, p. 20
Table 5 describes the grades of evidence that can be assigned.35 Grades reflect the strength of the body of evidence to answer the KQs on the comparative effectiveness, efficacy, and harms of the interventions examined in this review. Two reviewers assessed each domain for each key outcome and resolved any differences by consensus discussion or referral to a third, senior member of the team. We graded the strength of evidence for the outcomes deemed to be of greatest importance to decisionmakers and those commonly reported in the literature; we did not grade the strength of evidence for KQ 1 (on components and features of MTM services). The grades described in Table 5 describe the state of evidence (which may demonstrate benefit, harm, or no effect) and the confidence in the stability of that state. An insufficient grade is not a statement about lack of efficacy or effectiveness; rather, it is a statement about the lack of convincing evidence on benefit, harm, or lack of effect.
Assessing Applicability
We assessed applicability of the evidence following guidance from the “Methods Guide for Effectiveness and Comparative Effectiveness Reviews.”36 We used the PICOTS framework to explore factors that affect applicability. Some factors identified a priori that may limit the applicability of evidence include the following: age and health status of enrolled populations, health insurance coverage and access to health care, and complexity and intensity of the MTM intervention.
Peer Review and Public Commentary
This report received extensive external peer review and was posted for public comment from December 2, 2013, to January 6, 2014. Comments were received from five peer reviewers and four Technical Expert Panel members. In addition, we received public comments from eight individuals and professional organizations. We addressed all comments in the final report, making revisions as needed; a disposition of comments report will be publicly posted 3 months after release of the final report.