Ellis P, Robinson P, Ciliska D, et al. Diffusion and Dissemination of Evidence-based Cancer Control Interventions. Rockville (MD): Agency for Healthcare Research and Quality (US); 2003 May. (Evidence Reports/Technology Assessments, No. 79.)

  • This publication is provided for historical reference only and the information may be out of date.

2. Methodology

A multidisciplinary research team was assembled, with members from the nominating organization, the National Cancer Institute (NCI); the AHRQ Task Order Officer (TOO); invited technical experts; McMaster local experts; and research staff (refer to Appendix A for a list of the collaborative team).

When the original call for proposals came out in early 2001, NCI's Division of Cancer Control and Population Sciences was interested in knowing more about the types of diffusion and dissemination strategies used, the potential variation in these strategies across the cancer control continuum, and the outcomes of these diffusion and dissemination strategies (see Appendix B for the original NCI questions).

The first step during the topic assessment and refinement process was to organize a one-day video conference with the NCI partners, the TOO, invited Topic Experts, and the McMaster Team in order to define the magnitude of the topic to be addressed and to refine the preliminary research questions for this evidence report. During the video conference call it became obvious that the scope of the original proposal was too large to be suitably addressed in this project. Subsequently, it was agreed that this evidence report would focus on addressing two primary objectives: (1) to provide an overview of the cancer control interventions that are effective in promoting behavior change, and (2) to identify evidence-based strategies that have been evaluated to disseminate these cancer control interventions.

Given that the cancer control intervention literature is extensive and a large number of systematic reviews have already been undertaken, it was proposed that the first objective of this evidence report would be accomplished by conducting a review of existing systematic reviews. This review would provide a summary of the state-of-evidence for the effectiveness of cancer control interventions. The review of systematic reviews would focus on specific topics in the areas of prevention, early detection, and supportive care. Five topics were selected based on NCI's cancer control priorities. These topics are: adult smoking cessation, adult healthy diet, mammography, cervical cancer screening, and control of cancer pain.

In order to address the second objective of this evidence report, a systematic review of primary studies would be conducted. This systematic review would determine what strategies have been evaluated to disseminate cancer control interventions in the same five topic areas. The systematic review would identify the dissemination strategies used and the outcome of the dissemination efforts.

Regular teleconference calls were held with the TOO, the NCI partners, and technical experts throughout the data refinement and extraction phases. The experts reviewed the lists of selected articles, checked them for completeness, and brought to the attention of the McMaster team any work published in peer-reviewed journals that had not been identified by the searches.

Key Questions

This consultation with NCI partners, experts, the TOO, and the McMaster team resulted in a set of refined questions that would be addressed by this evidence report. In total, 10 key questions were agreed upon, two in each of the five topic areas. The first key question in each topic area was designed to focus on the effectiveness of interventions studied to promote the uptake of the target cancer control behavior (e.g., smoking cessation). The intent of the second key question in each of the topic areas was to determine which strategies have been evaluated to disseminate effective cancer control interventions that promote the uptake of the target cancer control behavior. The 10 key questions addressed by this evidence report are detailed in Tables 2 and 3. Also included in these tables are the inclusion and exclusion criteria and electronic databases searched for each of the key questions.

Table 2. Review of reviews: key questions, inclusion criteria, and databases searched.


Table 3. Primary studies: Key questions, inclusion criteria, and databases searched.


General Literature Search Strategies

The development of the search strategies followed an iterative process in consultation with the McMaster Evidence-based Practice Center's librarian. Initially we chose search terms based on the MEDLINE indexing terms of several key publications. Our preliminary search strategy was tested using the “See Related” function of PubMed to ensure that the search would retrieve the key publications previously identified. The search terms were then refined using the same process. The MEDLINE search was modified to meet the specific features of CINAHL, EMBASE, and PsycINFO. The final search strategies for each database searched appear in Appendix C. No attempt was made to contact study authors for additional information due to time constraints.

Review of Systematic Reviews on the Effectiveness of Cancer Control Interventions

Literature Search

English-language citations were identified from the following sources: (1) MEDLINE, the US National Library of Medicine (NLM) database; (2) PreMedline; (3) CancerLIT; (4) EMBASE, the Excerpta Medica Database; (5) PsycINFO; (6) the Cumulative Index to Nursing and Allied Health Literature (CINAHL); (7) Sociological Abstracts; (8) the Cochrane Database of Systematic Reviews (CDSR); (9) references of articles and reviews identified for inclusion; and (10) the technical experts. A detailed description of these databases can be found in Table 4. The searches were performed between November 2001 and March 2002 (see Appendix C for specific search terms and dates). The strategies consisted of the keyword “meta-analysis,” or “systematic” or “quantitativ:” combined with “review:” or “overview:” (where “:” denotes truncation), as textwords (in abstracts or titles), together with topic-specific terms such as “smoking,” “smoking cessation,” and “tobacco use disorder.”

Table 4. Electronic Databases Searched.


Study Selection

Systematic reviews conducted on individuals (patients, clients, consumers, or the general public) or healthcare providers published in English in peer-reviewed journals were eligible. For the purpose of this evidence report, a review was considered to be systematic if it had stated inclusion criteria for primary studies and had explicitly identified methods used in the review.

In consultation with our technical experts, it was decided that studies eligible for inclusion would include all English-language systematic reviews published from January 1990 forward. Reports focusing solely on children or adolescents were excluded. Table 2 details the inclusion and exclusion criteria for each question.

Reliability of Study Selection

Two reviewers screened the titles and abstracts generated by the searches for preliminary eligibility. The Guidelines for Citation Retrieval are presented in Appendix D. Every article identified at this stage was retrieved.

At least two independent reviewers conducted full-text screening of each article retrieved to determine whether it met the inclusion criteria for this section of the report using Forms 1 and 2 presented in Appendix E. Any discrepancies were resolved by agreement between the two reviewers or in discussion at the team meeting. The level of agreement between the observers was quantified using a kappa statistic.b Details of the screening and selection process for each of the topic-specific key questions are presented in the results section of the report.
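For background on the kappa statistic used here: kappa quantifies the agreement between two reviewers beyond what chance alone would produce. The sketch below is purely illustrative; the 2×2 counts are invented for demonstration and are not taken from this report.

```python
# Illustrative (unweighted) Cohen's kappa for two reviewers' include/exclude
# decisions. All counts here are hypothetical, not from the report.

def cohens_kappa(table):
    """table[i][j] = number of articles placed in category i by reviewer 1
    and category j by reviewer 2 (e.g., 0 = exclude, 1 = include)."""
    n = sum(sum(row) for row in table)
    k = len(table)
    # Observed proportion of agreement (the diagonal of the table)
    observed = sum(table[i][i] for i in range(k)) / n
    # Agreement expected by chance, from the two reviewers' marginal totals
    expected = sum(
        (sum(table[i]) / n) * (sum(row[i] for row in table) / n)
        for i in range(k)
    )
    return (observed - expected) / (1 - expected)

# Hypothetical screening decisions for 100 articles:
# both include 45, both exclude 40, the reviewers split on 15.
ratings = [[40, 10],
           [5, 45]]
print(round(cohens_kappa(ratings), 2))  # → 0.7
```

A kappa of 1.0 indicates perfect agreement and 0.0 indicates agreement no better than chance; values in the 0.6–0.7 range, as reported in the footnotes to this chapter, are conventionally read as substantial agreement.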

Evaluating the Methodological Quality of Systematic Reviews

A standardized quality assessment tool developed by the Effective Public Health Practice Project23 was used for the assessment of the quality of included systematic reviews (see Appendix E, forms 3 and 4). It was developed based on the criteria of Sackett et al.24 and Oxman et al.25 and has been used extensively by members of the McMaster team. While the tool has not been published, it has been used in many reviews of systematic reviews26 and has been tested for validity and reliability. It consists of six criteria: comprehensiveness and statement of the search, description of the level of evidence (study design), quality assessment of the primary studies, integration of the results beyond listing, and adequacy of the data to support conclusions. A total score of five to six was rated as “strong,” a total score of three to four was rated as “moderate,” and a total score of two or less was rated as “weak” (Appendix E, form 4).
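The three-level rating rule just described can be stated compactly. A minimal sketch, assuming one point is awarded per criterion met (that scoring detail is our assumption, not spelled out in the report):

```python
# Map a total quality score on the six-criterion tool to the ratings
# described above: 5-6 "strong", 3-4 "moderate", 0-2 "weak".
def rate_systematic_review(total_score):
    if not 0 <= total_score <= 6:
        raise ValueError("total score must be between 0 and 6")
    if total_score >= 5:
        return "strong"
    if total_score >= 3:
        return "moderate"
    return "weak"

print(rate_systematic_review(4))  # → moderate
```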

Data Extraction from Systematic Reviews

Data extraction forms were developed and tested (Appendix E). Two reviewers extracted data independently from each of the full reports. Any differences were resolved by discussion between the two reviewers and by referring to the information in the original report. A third person participated in the entry of the data to an electronic database. Any differences that could not be resolved by the two reviewers completing the data extraction were determined by consensus with the local research team, and, if necessary, by the technical experts.

The original reports were not masked, because masking has been shown to be time consuming and to have no important impact on reducing bias in the results of systematic reviews.27

Systematic Review of Primary Studies on Dissemination of Cancer Control Interventions

Literature Search

English-language citations were identified from the following sources: (1) MEDLINE, the US National Library of Medicine (NLM) database; (2) PreMedline; (3) CancerLIT; (4) EMBASE, the Excerpta Medica Database; (5) PsycINFO; (6) CINAHL; (7) Sociological Abstracts; (8) HealthSTAR; (9) references from articles marked for inclusion; and (10) the technical experts. A detailed description of these databases can be found in Table 4. The searches were performed between November 2001 and March 2002 (see Appendix C for specific search terms and dates). Search terms included “dissemination,” “diffusion,” “implementation,” and “adoption” as textwords, in conjunction with the topic-specific terms. Full search strategies are included in Appendix C.

Study Selection

In consultation with the technical experts, primary studies evaluating the dissemination of a cancer control intervention and published in English in peer-reviewed journals were selected. All study designs were eligible for inclusion. Eligible studies were those published since 1980; according to the technical experts, it was unlikely that any primary studies evaluating the dissemination of cancer control interventions had been published before then. Reports focusing exclusively on children or adolescents were excluded (Appendix E, Form 6). Table 3 details the inclusion and exclusion criteria for each key question.

Reliability of Study Selection

All citations yielded by the search were screened in duplicate using the eligibility criteria described above and in Appendix D. The articles were grouped according to the questions they addressed. The level of agreement between the observers was quantified using a weighted kappa statistic.c
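The weighted kappa differs from the unweighted version in that partial credit is given for near-miss disagreements between ordered categories. A hypothetical sketch with linear weights and invented counts:

```python
# Linearly weighted kappa for two observers rating ordered categories.
# The 3x3 table below is invented for illustration only.

def weighted_kappa(table):
    """table[i][j] = count of items rated category i by observer 1 and
    category j by observer 2; categories are assumed to be ordered."""
    k = len(table)
    n = sum(sum(row) for row in table)
    row_m = [sum(row) / n for row in table]                             # observer 1 marginals
    col_m = [sum(table[i][j] for i in range(k)) / n for j in range(k)]  # observer 2 marginals
    observed = expected = 0.0
    for i in range(k):
        for j in range(k):
            w = abs(i - j) / (k - 1)             # linear disagreement weight
            observed += w * table[i][j] / n      # observed weighted disagreement
            expected += w * row_m[i] * col_m[j]  # disagreement expected by chance
    return 1 - observed / expected

ratings = [[20, 5, 0],
           [5, 20, 5],
           [0, 5, 20]]
print(round(weighted_kappa(ratings), 2))  # → 0.71
```

With these weights, a one-category disagreement counts half as much as a two-category disagreement, which is why the weighted statistic suits ordered screening decisions better than the unweighted form.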

Evaluating the Methodological Quality of Primary Studies

A standardized quality assessment tool developed by the Effective Public Health Practice Project23 was used to evaluate the methodological quality of these primary studies. It was adapted from those developed by Clarke et al.,28 and Jadad et al.29 As community interventions are often not evaluated by randomized trials, the tool reflects other possible study designs, and rates the following criteria: selection bias, study design, confounders, blinding, data collection methods (reliability and validity), withdrawals and dropouts, intervention integrity, and analyses. Based on a dictionary and standardized guide to assessing component ratings, each component was rated “strong,” “moderate,” or “weak.” Content and construct validity have been established.30 While the tool has not been published, it has been used in several systematic reviews26 conducted by members of the McMaster team. A comparison of the tool used in this review was made with the tool used in the Guide to Community Preventive Health Services.31 Eleven components are similar in both instruments (Appendix E, Forms 7 to 9).

Data Extraction from Primary Studies

Using the Guidelines for Full-text Relevance Screening for Questions Pertaining to Dissemination found in Appendix E (Form 7), two reviewers screened each article for inclusion. Selected reports were then assessed using the Quality Assessment Tool for Quantitative Studies and Component Ratings of Study (Appendix E forms 8 and 9).

Data Extraction and Synthesis for this Report

Evidence and summary tables were constructed to describe the most salient characteristics of the included studies. The local research team, in consultation with members of the partner organization and the TOO, evaluated the overall quantity and quality of the available data and decided that meta-analysis would be inappropriate for summarizing the evidence on each of the research questions and for each of the topics of interest. The main reasons for this decision were substantial heterogeneity across the studies, inconsistency in outcome measurements, low methodological quality, and incomplete data reporting. Whether analyses were adjusted for clustering effects was appraised as part of our quality assessment form. The report therefore represents a qualitative, systematic review of the existing evidence, emphasizing the implications for practice and the opportunities to fill existing knowledge gaps.

Peer Review Process

A list of potential peer reviewers was created at the outset of the study. During the course of the project, more names were added to this list by the McMaster Center and NCI. In May 2002, 53 people were approached by the McMaster team and asked if they would be willing to review this evidence report. Twenty-five of these people responded positively and were sent copies of this report. All reviewers received a copy of the “Structured Format for Referee's Comments” (Appendix F) and were encouraged to provide comments on the text. A list of the 24 reviewers' names and their affiliations is provided in Appendix F. In addition, Dr. Patricia Huston agreed to be the criticism editor, synthesizing the comments and preparing feedback that enabled the systematic incorporation of comments into the final version of the report.

Footnotes

b

More than 5,000 titles and abstracts were examined as a result of these search strategies; 232 papers were retrieved for full-text screening, and evidence tables were prepared for 41 unique studies. The weighted kappa statistic for agreement was 0.6367 (95% CI, 0.53 to 0.75).

c

More than 6,000 titles and abstracts were examined as a result of these search strategies; 456 papers were retrieved for full-text screening, and evidence tables were prepared for 31 unique studies. The weighted kappa statistic for agreement was 0.5329 (95% CI, 0.31 to 0.76).
