
Holden DJ, Harris R, Porterfield DS, et al. Enhancing the Use and Quality of Colorectal Cancer Screening. Rockville (MD): Agency for Healthcare Research and Quality (US); 2010 Feb. (Evidence Reports/Technology Assessments, No. 190.)

This publication is provided for historical reference only and the information may be out of date.


2. Methods

In this chapter, we document the procedures that the RTI International–University of North Carolina Evidence-based Practice Center (RTI–UNC EPC) used to develop this comprehensive evidence report on use and quality of screening tests for colorectal cancer (CRC). To provide a framework for the review, we first present the key questions and their underlying analytic framework. We then describe our inclusion and exclusion criteria, search and retrieval process, and methods of abstracting relevant information from the eligible articles to generate evidence tables. We also discuss our criteria for rating the quality of individual articles and for grading the strength of the evidence as a whole.

Technical Expert Panel (TEP)

In designing the study questions and methodology at the outset of this report, we consulted several technical and content experts, seeking broad expertise and perspectives. We identified five technical experts, in addition to the chair for the National Institutes of Health State-of-the-Science Conference on Enhancing Use and Quality of Colorectal Cancer Screening, for a total of six members (Appendix E). The TEP provided assistance throughout the project and contributed to the Agency for Healthcare Research and Quality’s (AHRQ’s) broader goals of (1) creating and maintaining science partnerships as well as public-private partnerships and (2) meeting the needs of an array of potential customers and users of its products. Thus, the TEP served as both an additional resource and a sounding board during the project.

Divergent and conflicting opinions are common; we perceive them as healthy scientific discourse that contributes to a thoughtful, relevant systematic review. Nonetheless, in the end, study questions, design, and/or methodologic approaches do not necessarily represent the views of individual technical and content experts.

To ensure robust, scientifically relevant work, we called on the TEP to provide reactions to work in progress and advice on substantive issues or possibly overlooked areas of research. Specifically, TEP members participated in conference calls and discussions through e-mail to:

  • refine the analytic framework at the beginning of the project;
  • discuss the preliminary assessment of the literature, including inclusion/exclusion criteria; and
  • provide input on the information and categories included in evidence tables.

Because of their extensive knowledge of the literature, including numerous articles authored by TEP members themselves, and their active involvement in the field, we also asked TEP members to participate in the external peer review of the draft report.

Key Questions and Analytic Framework

Based on the key questions (KQs) described in Chapter 1, we developed an analytic framework to guide our systematic review. To recap, the KQs are as follows:

  • KQ 1: Background (recent trends in the use and quality of CRC screening tests);
  • KQ 2: Factors influencing use of CRC screening;
  • KQ 3: Effective strategies for increasing appropriate use of CRC screening and followup;
  • KQ 4: Current and projected capacities to deliver CRC screening and surveillance at the population level;
  • KQ 5: Effective approaches for monitoring use and quality of CRC screening; and
  • KQ 6: Needed research to make progress and have the greatest public health impact in promoting the appropriate use of CRC screening.

Figure 1 depicts how we believe various factors interact to influence the appropriate use of CRC screening tests. The boxes are indicative of factors or outcomes of the process of obtaining appropriate tests; the circles are meant to depict some interaction or decision point in the process (i.e., the interaction between physician and patient and the patient’s decision point). KQs 1–5 are called out in the figure (dotted lines); the societal and health system factors are assumed to affect all steps in the process.

Figure 1. Analytic framework for the appropriate use of colorectal cancer screening. COLO, colonoscopy; CRC, colorectal cancer; CT COLO, computed tomographic colonography; DNA Stool, deoxyribonucleic acid fecal test; (more...)

Specifically, both KQ 1, which pertains to trends in use and quality of colorectal cancer screening, and KQ 5, which pertains to monitoring the use and quality, are considered to be outcomes of the process depicted in Figure 1. In the remainder of this systematic review, we assess the changes in trends over time and how the use and quality of the specific tests (i.e., colonoscopy, sigmoidoscopy, computed tomography [CT] colonography, and stool tests) are monitored. This includes paying particular attention to issues such as the extent to which overutilization and/or underutilization of tests is evident.21–22

Many factors have been shown in the literature to influence both the use and quality of tests. Although the patient is ultimately the one to decide whether to obtain screening,23 a discussion with the health care provider about screening needs and options can directly affect the decision.24–25 This discussion is depicted in the analytic framework as the point at which an interaction between key patient and provider characteristics occurs to guide the discussion.

As shown in the two boxes on the far left of the analytic framework (Figure 1), both the patient and the provider bring characteristics to this interaction that are immutable yet likely to influence the provider’s recommendations for CRC screening and the patient’s ultimate decision to seek it. Termed “predisposing” by Green and Kreuter, these factors exert their effects before a behavior occurs by increasing or decreasing a person’s or a population’s motivation to undertake that particular behavior.26 Predisposing patient characteristics that may influence the ultimate decision related to CRC screening include

  • family history of CRC;
  • perceived risk or understanding of whether they are likely to be diagnosed with CRC;
  • education level, income, and other socioeconomic factors;27 and
  • location of residence (i.e., proximity to screening facilities and/or providers).28

Predisposing physician characteristics that have been shown to influence screening recommendations24,29 include

  • perceived effectiveness of each type of CRC screening test;
  • physician demographic characteristics such as age, solo versus group practice, and location of practice; and
  • medical training and awareness of current screening guidelines.

Literature Search

To identify articles relevant to each KQ, we searched three electronic databases—MEDLINE®, the Cochrane Library, and the Cochrane Central Trials Registry—for articles published from January 1998 through September 2009. We used Medical Subject Headings (MeSH or MH) as search terms when available and key words when appropriate. MeSH terms for our searches included colorectal neoplasms, colonoscopy, and sigmoidoscopes; major headings included mass screening; and key terms included stool test, FOBT, and DNA stool. The full search strategy of exact search strings is presented in Appendix A.

Our initial searches of electronic databases produced 3,029 unduplicated records. We supplemented our electronic searches by manually searching reference lists of included studies, pertinent review articles, and editorials. Additional included studies were identified from recommendations of TEP members and peer reviewers. We imported all citations into an electronic database (EndNote X3).
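Merging results from several databases requires removing duplicate citations before screening. A minimal sketch of that step is below; the record fields and the (normalized title, year) matching key are illustrative assumptions, not the report's actual procedure (which used EndNote).

```python
# Hypothetical sketch: deduplicating citation records pulled from several
# databases, keyed on a normalized (title, year) pair. Field names are
# illustrative, not taken from the report.
import re

def normalize(title: str) -> str:
    """Lowercase and collapse punctuation/whitespace so near-identical titles match."""
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

def deduplicate(records):
    """Keep the first record seen for each (normalized title, year) key."""
    seen = set()
    unique = []
    for rec in records:
        key = (normalize(rec["title"]), rec["year"])
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

citations = [
    {"title": "Colorectal Cancer Screening Trends.", "year": 2005, "db": "MEDLINE"},
    {"title": "Colorectal cancer screening trends",  "year": 2005, "db": "Cochrane"},
    {"title": "FOBT uptake in primary care",         "year": 2003, "db": "MEDLINE"},
]
print(len(deduplicate(citations)))  # 2
```

In practice reference managers also compare authors, journal, and pagination, since different databases may abbreviate titles differently.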

Study Selection Process

Inclusion and Exclusion Criteria

As noted in Chapter 1, this systematic review focuses on the use and quality of CRC screening procedures. We developed detailed eligibility criteria with respect to population, interventions, outcomes, time period, and study design (Table 1). We limited eligible studies to those conducted in the United States so that the data would reflect domestic health care concerns, practices, and guidelines. We also restricted our searches to studies published in 1998 or later to ensure that results had relevance to current trends and practice for CRC screening. We excluded studies that (1) were published in languages other than English, (2) did not report information pertinent to the KQs, (3) had fewer than 30 subjects for randomized or nonrandomized controlled trials or fewer than 100 subjects for observational studies, (4) were not original research, or (5) evaluated interventions that were conducted in academic settings that would not be applicable to most practice settings.

Table 1. Inclusion/exclusion criteria.

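The scope restrictions and sample-size thresholds stated above can be expressed as a simple eligibility filter. The sketch below is illustrative only; the `Study` record and its fields are our assumptions, and the full criteria in Table 1 are more detailed.

```python
# Illustrative eligibility filter mirroring the stated exclusion rules:
# US studies only, published 1998 or later, in English, original research,
# with n >= 30 for trials and n >= 100 for observational studies.
from dataclasses import dataclass

@dataclass
class Study:
    country: str
    year: int
    language: str
    design: str          # "trial" or "observational" (simplified)
    n_subjects: int
    original_research: bool

def is_eligible(s: Study) -> bool:
    if s.country != "US" or s.year < 1998 or s.language != "English":
        return False
    if not s.original_research:
        return False
    min_n = 30 if s.design == "trial" else 100
    return s.n_subjects >= min_n

print(is_eligible(Study("US", 2004, "English", "trial", 45, True)))          # True
print(is_eligible(Study("US", 2004, "English", "observational", 45, True)))  # False (n < 100)
```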

We examined abstracts of all articles to determine whether studies met our eligibility criteria. Two members of our research team reviewed each abstract independently for inclusion or exclusion, using an Abstract Review Form (Appendix B). If either reviewer concluded on the basis of the abstract that the article should be considered in the review, we obtained the full text. Two members of our research team then independently reviewed each full-text article for inclusion or exclusion using a Full Text Review Form (Appendix B). The two reviewers discussed any disagreements; when they could not reach consensus, the full team met and discussed the article to determine as a group whether the study met eligibility criteria. Articles that did not meet criteria for inclusion are listed in Appendix D along with reasons for exclusion.
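The two-stage dual-review rule just described can be summarized in a short sketch; the function names and return strings are ours, not the report's.

```python
# Sketch of the two-stage screening rule: at the abstract stage a single
# "include" vote advances an article to full-text review; at the full-text
# stage any disagreement goes to consensus discussion, then to the full team.
def abstract_stage(vote_a: bool, vote_b: bool) -> str:
    """Either reviewer voting include triggers full-text retrieval."""
    return "retrieve full text" if vote_a or vote_b else "exclude"

def full_text_stage(vote_a: bool, vote_b: bool) -> str:
    """Agreement decides; disagreement escalates to discussion."""
    if vote_a == vote_b:
        return "include" if vote_a else "exclude"
    return "discuss; escalate to full team if unresolved"

print(abstract_stage(True, False))   # retrieve full text
print(full_text_stage(True, False))  # discuss; escalate to full team if unresolved
```

Note the asymmetry by design: the abstract stage is deliberately permissive (one vote suffices) to minimize wrongly excluded studies, while the full-text stage requires agreement.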

KQs 1 and 6, although part of this report, are not part of the systematic review. Therefore, studies described or discussed for those KQs did not have to satisfy final inclusion/exclusion criteria; such articles are not included in the overall number of included studies for the systematic review. We developed a “Background” category for articles that could provide useful information for KQ 1, KQ 6, the introduction, or the discussion.

Literature Synthesis

Data Abstraction

We designed and used a structured data abstraction form. Trained reviewers abstracted data from each study and assigned an initial quality rating. A second reviewer read each abstracted article, evaluated the accuracy, completeness, and consistency of the data abstraction, and confirmed the quality rating. If differences in quality ratings could not be resolved by discussion, a third senior reviewer was involved. The full research team met regularly during the article abstraction period to discuss global issues related to the data abstraction process.

The final evidence tables are presented in their entirety in Appendix C. Studies are presented in the evidence tables alphabetically by the last name of the first author. A list of abbreviations and acronyms used in the tables appears at the beginning of Appendix C.

Rating Quality of Individual Studies

To assess the quality (internal validity or risk of bias) of studies, we used predefined criteria based on those described in the AHRQ Methods Guide for Comparative Effectiveness Reviews (ratings: good, fair, poor).30

Elements of quality assessment for trials included, among others, the methods used for randomization, allocation concealment, and blinding; the similarity of compared groups at baseline; maintenance of comparable groups; overall and differential loss to followup; and the use of intention-to-treat analysis. We assessed observational studies based on the potential for selection bias (methods of selection of subjects and loss to followup), potential for measurement bias (equality, validity, and reliability of ascertainment of outcomes), adjustment for potential confounders, and statistical analysis.

In general terms, a “good” study has the least bias, and its results are considered valid. A “fair” study is susceptible to some bias, but probably not enough to invalidate its results. The fair-quality category is likely to be broad, so studies with this rating will vary in their strengths and weaknesses. A “poor” rating indicates significant bias (stemming from, e.g., serious errors in design or analysis, large amounts of missing information, or discrepancies in reporting) that may invalidate the study’s results.

Studies that met all criteria were rated good quality. The majority of studies received a quality rating of fair. This category includes studies that presumably fulfilled all quality criteria but did not report their methods to an extent that answered all our questions. Thus, the fair-quality category includes studies with quite different strengths and weaknesses. Studies that had a fatal flaw (defined as a methodological shortcoming that leads to a very high probability of bias) in one or more categories were rated poor quality and excluded from our analyses. Poor-quality studies and reasons for that rating are presented in Appendix F.
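The rating logic above reduces to a simple rule: any fatal flaw yields "poor", all criteria met yields "good", and everything in between is "fair". A minimal sketch, with illustrative criterion names:

```python
# Hedged sketch of the good/fair/poor rating rule described above.
# Each quality criterion is marked "met", "unclear", or "fatal flaw";
# the criterion names below are illustrative examples for a trial.
def rate_quality(criteria: dict) -> str:
    values = list(criteria.values())
    if "fatal flaw" in values:
        return "poor"    # excluded from analyses
    if all(v == "met" for v in values):
        return "good"
    return "fair"        # e.g., methods incompletely reported

trial = {
    "randomization": "met",
    "allocation concealment": "unclear",  # not reported in enough detail
    "blinding": "met",
    "intention-to-treat analysis": "met",
}
print(rate_quality(trial))  # fair
```

This also makes plain why "fair" is the broadest category: it absorbs every study that is neither fully reported nor fatally flawed.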

Grading Strength of Evidence

We evaluated the overall strength of evidence for the questions addressing the main outcomes of our review (KQs 3, 4, and 5) based on an approach devised for AHRQ’s Methods Guide.30–31 Developed to grade the overall strength of a body of evidence, this approach incorporates four key domains: risk of bias, consistency, directness, and precision. It also considers other optional domains that may be relevant for some scenarios, such as a dose-response association, plausible confounding that would decrease the observed effect, strength of association (magnitude of effect), and publication bias. The evaluation of risk of bias includes assessment of study design and aggregate quality of studies.31

We graded evidence as consistent when effect sizes across studies were in the same direction and had a narrow range. When the evidence linked the interventions directly to our outcomes of interest, we graded the evidence as being direct. We graded evidence as being precise when results had a low degree of uncertainty. At least two members of our research team evaluated the overall strength of evidence for each outcome based on a qualitative assessment of strength of evidence for each domain and reconciled all disagreements.
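As a simplified illustration, the four required domains can be recorded per outcome and mapped to an overall grade. The counting heuristic below is our assumption for demonstration only; the formal AHRQ approach is a qualitative judgment, not a point score.

```python
# Illustrative only: scoring the four required strength-of-evidence domains.
# The grade labels match Table 2; the counting rule is an assumed heuristic.
from dataclasses import dataclass

@dataclass
class EvidenceBody:
    risk_of_bias: str   # "low", "medium", or "high"
    consistent: bool    # effects in the same direction, narrow range
    direct: bool        # interventions linked directly to outcomes of interest
    precise: bool       # low degree of uncertainty in results

def grade(e: EvidenceBody) -> str:
    favorable = sum([e.risk_of_bias == "low", e.consistent, e.direct, e.precise])
    if favorable == 4:
        return "high"
    if favorable == 3:
        return "moderate"
    if favorable >= 1:
        return "low"
    return "insufficient"

print(grade(EvidenceBody("low", True, True, False)))  # moderate
```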

The levels of strength of evidence are shown in Table 2. As mentioned, we present the strength of evidence assessments only for KQs 3, 4, and 5. These are the three KQs that are analytic and required an assessment of the body of literature available for this review. KQ 2 is descriptive and did not lend itself to an assessment of the strength of evidence. The strength of evidence tables appear in Chapter 4 as part of the presentation of results for KQs 3, 4, and 5.

Table 2. Strength of evidence grades and definitions.


Applicability

We evaluated the applicability of the evidence based on a qualitative assessment of the population, intensity or quality of treatment, choice of the comparator, outcomes, and timing of followup. We based our parameters for evaluation on guidance provided by AHRQ’s Methods Guide.30 Specifically, we considered whether enrolled populations differ from target populations, whether studied interventions are comparable with those in routine use, whether comparators reflect best alternatives, whether measured outcomes are known to reflect the most important clinical outcomes, and whether followup was sufficient.

Peer Review

This draft report was subjected to external peer review by eight individuals who were experts in fields relevant to CRC screening or from various stakeholder and user communities (listed in Appendix E). We provided the draft report to them on September 14, 2009. All eight provided thoughtful feedback on the report, including providing us with additional references that we should consider for inclusion in the final report. We reviewed all additional references and included those that were appropriate and within the scope of this report. We also addressed all comments and revised the report accordingly.
