Viswanathan M, Kraschnewski J, Nishikawa B, et al. Outcomes of Community Health Worker Interventions. Rockville (MD): Agency for Healthcare Research and Quality (US); 2009 Jun. (Evidence Reports/Technology Assessments, No. 181.)
In this chapter, we document the procedures that the RTI International–University of North Carolina Evidence-based Practice Center (RTI–UNC EPC) used to develop this comprehensive evidence report on community health workers (CHWs). The team was led by a senior health services researcher (Meera Viswanathan, Ph.D., Study Director), and included a physician trained in internal medicine and pediatrics (Dan Jonas, M.D., M.P.H.), a general internist (Jennifer Kraschnewski, M.D.), a preventive medicine physician (Brett Nishikawa, M.D.), an economist (Amanda Honeycutt, Ph.D.), and two EPC staff members, Laura Morgan, M.A., and Patricia Thieda, M.A.
We describe our inclusion and exclusion criteria, search and retrieval process, and methods of abstracting relevant information from the eligible articles to generate evidence tables. We also discuss our criteria for grading the quality of individual articles and for rating the strength of the evidence as a whole.
Literature Review Methods
Inclusion and Exclusion Criteria
Our inclusion and exclusion criteria are documented in Table 1. As noted in Chapter 1, this systematic review focuses on characteristics, outcomes, cost-effectiveness, and training of CHWs. We restricted our searches to studies conducted in the United States so that the results would be relevant to domestic health care concerns, and to studies published in 1980 or later to ensure relevance to current practice.
We excluded studies that (1) were published in languages other than English (given the available time and resources); (2) did not report information pertinent to the key clinical questions; (3) were randomized controlled trials (RCTs) or nonrandomized comparative cohorts that enrolled fewer than 40 subjects; and (4) were not original studies.
A key inclusion criterion was that the effect of the CHW be abstractable. As a result, our review is limited to studies in which the effect of the CHW intervention can be isolated; we excluded 38 studies in which the outcome of the intervention could not be attributed to the CHW. These studies often compared usual care with a combination of interventions that may have included CHWs as one of several components and did not distinguish the effect of the CHW from that of the other components. Another key criterion was that the intervention include CHWs; as a result, we excluded 13 studies that relied on peer counselors.
For key questions (KQs) 1, 2, and 3, we required that the CHW intervention be compared with an alternative; we excluded 70 studies without comparison arms. For KQ 4, we required that the description of training for CHWs be supported by pre- and post-training evaluation data; we excluded 34 studies without such data.
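To make the screening rules above concrete, the following is a minimal, illustrative sketch (not part of the original review protocol, which used the forms in Appendix B) of how the stated inclusion/exclusion criteria could be expressed as a single screening function; all field names (for example, language, n_subjects, has_comparison_arm) are hypothetical.

```python
# Illustrative sketch only: encodes the screening rules described above.
# All field names are hypothetical; the actual screening was done manually
# with the Abstract Review and Full-Text Inclusion/Exclusion Forms.

def passes_screening(study: dict, key_question: int) -> bool:
    """Return True if a study record meets the stated inclusion criteria."""
    if study.get("language") != "English":
        return False                              # exclusion 1: non-English
    if not study.get("reports_kq_information", False):
        return False                              # exclusion 2: not pertinent to the KQs
    if (study.get("design") in {"RCT", "nonrandomized cohort with comparison"}
            and study.get("n_subjects", 0) < 40):
        return False                              # exclusion 3: fewer than 40 subjects
    if not study.get("original_study", False):
        return False                              # exclusion 4: not an original study
    if study.get("country") != "United States" or study.get("year", 0) < 1980:
        return False                              # US setting, published 1980 or later
    if not study.get("chw_effect_isolable", False):
        return False                              # effect of the CHW must be abstractable
    if key_question in (1, 2, 3) and not study.get("has_comparison_arm", False):
        return False                              # KQs 1-3 require a comparison arm
    if key_question == 4 and not study.get("pre_post_training_data", False):
        return False                              # KQ 4 requires pre/post training data
    return True
```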
Literature Search and Retrieval Process
Databases. We searched three electronic databases—MEDLINE®, Cochrane Collaboration resources, and the Cumulative Index to Nursing and Allied Health Literature (CINAHL). We also hand-searched the reference lists of relevant articles to ensure that we did not miss pertinent studies. We consulted our Technical Expert Panel (TEP) about studies or trials that were under way or not yet published.
Search terms. Based on the inclusion/exclusion criteria above, we generated a list of Medical Subject Heading (MeSH) search terms (Table 2 and Appendix A). Our TEP also reviewed these terms to ensure that we were not missing any critical areas, and this list represents our collective decisions as to the MeSH terms used for all MEDLINE searches.
Our initial searches in MEDLINE produced 640 unduplicated records. Searches in other databases (CINAHL, Cochrane, and the Cochrane Clinical Trials Registry) yielded 169 new records (unduplicated across all databases), for a total of 809 records. We conducted update searches in all databases in November 2008 and supplemented electronic searches with manual searches of reference lists. In addition, we received recommendations for studies of interest from the TEP and conducted a supplemental search on patient navigators after peer review. In all, we identified 1,076 unduplicated references from all searches (Table 3).
Figure 2 presents the yield and results of our searches, which we conducted from April through November 2008. Beginning with a yield of 1,076 articles, we retained 89 articles that addressed our KQs and met our inclusion/exclusion criteria (Figure 2). We reviewed titles and abstracts against the basic inclusion criteria above, retained the relevant articles, and used them as appropriate in the discussion in Chapter 4.
Article selection process. Once we had identified articles through the electronic database searches, review articles, and reference lists, we examined their abstracts to determine whether the studies met our criteria. Each abstract was reviewed independently by two reviewers using an Abstract Review Form (Appendix B). If either reviewer concluded that the article should be included, we retained it for full-text review.
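The retention rule for dual abstract review can be summarized in a short sketch (illustrative only; the actual decisions were recorded on the Abstract Review Form):

```python
# Sketch of the dual-review retention rule described above: each abstract was
# reviewed independently by two reviewers, and an "include" from either
# reviewer was sufficient to retain the article for full-text review.

def retain_for_full_text_review(reviewer_a_includes: bool, reviewer_b_includes: bool) -> bool:
    return reviewer_a_includes or reviewer_b_includes

assert retain_for_full_text_review(True, False)        # one "include" is enough
assert not retain_for_full_text_review(False, False)   # both exclude -> not retained
```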
Of this entire group of 1,076 citations, 590 required full review. For the full article review, one team member read each article and decided whether it met our inclusion criteria, using a Full-Text Inclusion/Exclusion Form (Appendix B). Reasons for article exclusion are listed in Appendix D.
Literature Synthesis
Development of Evidence Tables and Data Abstraction Process
The team jointly developed the evidence tables. We designed the tables to provide sufficient information to enable readers to understand the studies and to determine their quality; we gave particular emphasis to essential information related to our KQs. We based the format of our evidence tables on successful designs that we have used for prior systematic reviews.
We trained abstractors by having them abstract several articles into the evidence tables and then reconvene as a group to discuss the utility of the table design. The abstractors repeated this process over several iterations until the tables contained the categories needed to capture the information reported in the articles.
Four members of the team (Jennifer Kraschnewski, Brett Nishikawa, Laura Morgan, and Patricia Thieda) shared the task of initially entering information into the evidence tables. Authors of individual sections reviewed the articles and edited all initial table entries for accuracy, completeness, and consistency. Abstractors reconciled all disagreements concerning the information reported in the evidence tables. The full research team met regularly during the article abstraction period and discussed global issues related to the data abstraction process.
The final evidence tables are presented in their entirety in Appendix C. Studies are presented in the evidence tables alphabetically by the last name of the first author. A list of abbreviations and acronyms used in the tables appears at the beginning of that appendix.
Quality Rating of Individual Studies
Validated quality rating forms for RCTs have been in use for several years, but no similarly well-validated form exists for observational studies. RTI has been developing a form to rate observational studies.54 This form, which can be used to rate the quality of a variety of observational studies, was based on a review of more than 90 AHRQ systematic reviews that included observational studies; we supplemented this review with other key articles identifying domains and scales.55,56 We structured the resultant form largely on the basis of the domains and subdomains suggested by Deeks and colleagues55 and then adapted it for use in this systematic review (Appendix B).
The form currently includes nine key domains for observational studies: background, sample selection, specification of exposure, specification of outcome, soundness of information, followup, analysis comparability, analysis of outcome, and interpretation. An additional domain for RCTs is the quality of randomization. We used these dimensions of quality to assess the overall quality of each study. We did not attempt to construct a quantitative quality scale: previous scales have been critiqued for their lack of inter-rater reliability, and such scales do not account for a single flaw that may substantially bias results even when a study meets standards for all other aspects of quality. Each study was evaluated for quality by two reviewers, and abstractors reconciled all disagreements.
Strength of Available Evidence
We evaluated the strength of evidence based on the AHRQ Comparative Effectiveness Methods Guide.57 The strength of evidence for each outcome incorporates risk of bias, consistency, directness, precision, and the presence of other modifying factors. As described in Owens et al., the evaluation of risk of bias includes assessment of study design and the aggregate quality of studies.57 We judged good-quality studies with strong designs to yield evidence with a low risk of bias. We graded evidence as consistent when effect sizes across studies were in the same direction and fell within a narrow range. When the evidence linked the interventions directly to health outcomes, we graded the evidence as direct. We graded evidence as precise when results had a low degree of uncertainty. When considering the effect of confounders, we evaluated whether the intensity of the interventions in both arms could have explained the effects (or absence of effects); additionally, we considered whether other sources of effect modification or confounding had been accounted for. Two reviewers evaluated the overall strength of evidence for each outcome based on a qualitative assessment of the strength of evidence for each domain, and we reconciled all disagreements. The levels of strength of evidence are shown in Table 4.
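As an illustration of the information recorded for each outcome (a sketch only; the overall grade was assigned qualitatively by two reviewers, not computed), the domain assessments described above can be represented as follows. The record structure and any example values are hypothetical, not findings from this report.

```python
# Sketch of a per-outcome strength-of-evidence record reflecting the domains
# described above. The structure and example values are illustrative; the
# report assigned overall grades qualitatively, with dual review and
# reconciliation of disagreements (levels are defined in Table 4).
from dataclasses import dataclass

@dataclass
class StrengthOfEvidence:
    outcome: str
    risk_of_bias: str        # e.g., "low" for good-quality studies with strong designs
    consistency: str         # "consistent" when effect sizes share direction and a narrow range
    directness: str          # "direct" when interventions are linked directly to health outcomes
    precision: str           # "precise" when results have a low degree of uncertainty
    other_modifying_factors: str
    overall_grade: str       # assigned qualitatively by two reviewers (see Table 4)

# Hypothetical example record, not a finding from this report:
example = StrengthOfEvidence(
    outcome="example outcome",
    risk_of_bias="medium",
    consistency="consistent",
    directness="direct",
    precision="imprecise",
    other_modifying_factors="intervention intensity differed between arms",
    overall_grade="low",
)
```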
Applicability of the Evidence
We evaluated the applicability of the evidence based on a qualitative assessment of the population; the intensity or quality of treatment; the choice of comparator; the outcomes; and the timing of followup. We based our parameters for evaluation on guidance provided by AHRQ’s Comparative Effectiveness Methods Guide.58 Specifically, we considered whether enrolled populations differed from target populations and how this might affect the risk of benefits or harms; whether the studied interventions were comparable to those in routine use and how this might affect the risk of benefits or harms; whether the comparators reflected the best alternative treatment and how this might influence treatment effect size; whether the measured outcomes are known to reflect the most important clinical benefits and harms; and whether followup was sufficient to detect clinically important benefits.
External Peer Review
AHRQ’s Scientific Resource Center requested review of this report from a wide array of outside experts. We received three external reviews and revised the report as appropriate.