This publication is provided for historical reference only and the information may be out of date.
Overview
The UO-EPC's evidence report on acute stroke is based on a systematic review of the scientific-medical literature to identify and synthesize the results of studies addressing the 12 questions elaborated by the Acute Stroke Review Panel. Together with content experts, UO-EPC staff identified specific issues integral to the review. A Technical Expert Panel (TEP) provided expert guidance on the conduct of the systematic review. Evidence tables presenting the key study characteristics and results from each included study were developed, and summary tables were derived from them. The methodological quality of the included studies was appraised, and individual study results were summarized. For some objectives, where meta-analysis was not appropriate, a narrative interpretation of the literature alone was provided.
Key Questions Addressed in This Report
From the three primary objectives of the UO-EPC's task, the following key questions were derived and addressed in this report.
Intracerebral hemorrhage (ICH)
- Does surgery for adults with acute ICH reduce stroke-related mortality and disability? (Intervention A)
- Does antihypertensive treatment reduce stroke-related mortality and disability? (Intervention B)
Cerebral Infarction
- Does intraarterial (IA) thrombolysis reduce stroke-related mortality and disability in adults with acute ischemic stroke? (Intervention C)
- Does treatment to normalize blood glucose levels reduce stroke-related mortality and disability in adults with acute stroke? (Intervention D)
- Does mechanical thrombus disruption reduce stroke-related mortality and disability in adults with acute ischemic stroke? (Intervention E)
Patient selection for thrombolysis
- Are the effectiveness and safety of thrombolytic therapy for adults with acute ischemic stroke affected by the time from symptom onset to treatment? (Intervention F)
- Do pretreatment CT scoring systems affect the safety and efficacy of thrombolytic therapy for acute ischemic stroke? (Intervention G)
- Do pretreatment MRI scoring systems affect the safety and efficacy of thrombolytic therapy for acute ischemic stroke? (Intervention H)
- Does CT perfusion/angiography affect the safety and efficacy of thrombolytic therapy for acute ischemic stroke? (Intervention I)
- Do patient characteristics (i.e., age, gender, co-morbidities, functional status, medications) alter the safety and effectiveness of thrombolysis for acute ischemic stroke? When available, these data were extracted from the studies and are reported under the individual interventions.
Systems of care
- Are community education programs effective in reducing stroke-related disability and mortality? (Intervention J)
- Are designated centers effective in reducing stroke-related disability and mortality? (Intervention K)
- Are ED protocols for the management of acute stroke effective in reducing disability and mortality? (Intervention L)
We did not systematically review the effectiveness of IV-tPA. This has been adequately assessed in existing reviews, and its benefit has been well established for patients presenting within 3 hours. The seminal trial and a previously published meta-analysis are presented in the discussion section.
Analytic Framework
The purpose of this report was to systematically review the available literature in the field of acute stroke evaluation and treatment, in order to determine which interventions (delivered within the first 24 hours from onset of symptoms) are effective in reducing stroke-related morbidity or mortality. We also investigated the relationship between the safety and effectiveness of thrombolytic therapy and how they varied with the timing of the intervention in relation to the onset of symptoms. Effective acute stroke therapies require a system for rapid delivery of complex interventions and as such, occur within the context of an EMS capable of identifying and treating appropriate individuals. Each component of this system is in itself an intervention, which may result in harm if inappropriate candidates are submitted to treatment while others are missed. Thus, we have reviewed the evidence that specific systems of care (e.g., dedicated stroke programs) improve outcomes for patients with acute stroke.
The analytic framework is presented in Figure 1. This framework illustrates the review's context and our conceptual approach regarding relationships between symptom onset, interventions/decisions, and outcomes within the context of acute stroke. Within the framework, arrows indicate linkages (either preventive or treatment) with associated questions investigated by our review. For example, when an individual develops symptoms, what is the evidence for the effectiveness of community education programs in influencing the individual's decision to seek emergency medical attention? The framework also highlights the primary outcome measures associated with these linkages, used to define the success or failure of these interventions (i.e., patients are well, disabled, or dead).
Study Identification
Search Strategy
Comprehensive search strategies for each individual question were developed and tested in the Medline database (Search Strategy 1 in Appendix A). Initially, existing searches from the Cochrane Stroke Group were consulted. Indexing terms from relevant articles identified in Medline were used, and terms and limits were applied (such as age and trial type). These were repeatedly tested to assess recall and precision. The strategies were modified in consultation with the review team and then executed by two librarians, one who built the searches and another who validated the strategies. Many of the searches were limited to adult age groups and randomized controlled trials (RCTs) using the Highly Sensitive Search Strategy (HSSS), designed by the Cochrane Collaboration to identify RCTs in Medline. All searches related to question 1 were limited by age and study design, as were all question 2 topics except for question 2d, which was limited to adults but not by the HSSS. Searches related to question 3 were limited by neither study design nor age. Language restrictions were not imposed on the searches; non-English articles were identified but were not included in the synthesis.
The databases searched were Medline (1966 to April Week 4 2004), Embase (last 6 months) and CINAHL (1982 to April Week 5 2004) using the OVID interface. Also searched were the Stroke Trials Directory and the Cochrane Stroke Group Registry, as well as conference proceedings from the 28th International Stroke Conference 2003 (Stroke, Feb 2004) and the American Academy of Neurology Annual Meeting (published in Neurology). The Effective Practices and Organisation of Care (EPOC) registry was searched by the Cochrane review group for controlled studies, including controlled before-and-after (CBA) and interrupted time series (ITS) designs.
Records identified through electronic searching were downloaded, and duplicate records were identified and removed using citation management software (Reference Manager®). A total of 7,320 unique records were retrieved on the initial running of the search. An additional 163 unique records were retrieved on the updated search run near the project's completion. Therefore, after bibliographic records were retrieved through database searches and duplicate records were removed, a total of 7,483 unique items remained. The review team nominated four additional records. At the suggestion of the TEP, four prominent principal investigators were contacted regarding potential data from trials that had been prematurely terminated. Two investigators responded; however, no unpublished data were provided.
Eligibility Criteria
Published and unpublished studies reported in English,74–76 involving any research design (e.g., RCTs) and enrolling both male and female adult participants (age >16 years), including members of racial/ethnic minority populations with acute stroke, were eligible for inclusion if each also met the criteria outlined in Table 1. For studies regarding systems of care for acute stroke, the intervention studied may not be applied directly to patients with acute stroke (e.g., community education programs to increase awareness of stroke symptoms), but it must be intended to improve the care of patients with acute stroke.
Study Selection Process
The results of literature searches were posted to the UO-EPC's internet-based software system for review. To enhance the speed and efficiency of conducting and managing the systematic review process, this software, which resides on a secure website, was used to enable the electronic capture and internal comparison (relative to explicit criteria) of multiple reviewers' responses to relevance screening questions, and to requests to abstract specific data (e.g., study quality) from bibliographic records or full reports.
All results of searches for evidence were provided to two reviewers for assessment. A 3-step process was used. First, all studies were screened by both reviewers by reviewing the bibliographic record (i.e., title, authors, key words, abstract) and applying the inclusion/exclusion criteria. The record was retained if it appeared to contain pertinent study information according to the inclusion/exclusion criteria or if there was not enough information provided to determine eligibility at this level. Unless both reviewers agreed on at least one unequivocal reason for excluding a study, it was entered into the next phase of the review. The reasons for exclusion were noted using a modified QUOROM format (Appendix D).77
The second step of the review required screening of the full report of the study. The full reports were not masked given the equivocal evidence regarding the benefits of this practice.78–80 To be considered relevant at this second level of screening, all eligibility criteria had to be met as determined by both reviewers.
There are various templates for grading the strength of evidence. Almost all of these approaches rate randomized controlled trials (RCTs) at the top of the ranking scheme. This is not surprising, as RCTs have a comparator group and participants are assigned to all treatment groups through randomization. Randomization is unique in that it ‘controls’ for known confounders and, perhaps more importantly, unknown ones. Adequate randomization has been shown to reduce the influence of bias on the results of RCTs. Other designs, such as cohort and case-control studies, also offer some control over the influence of bias, because such designs incorporate a comparator group even though there is no randomization.
What is less clear is the extent of bias in studies that have no controls (i.e., no comparator group). Although it is feasible to provide data-analytical “solutions” for such designs, there is no adequate way to assess the influence of bias. In such circumstances it is pragmatic and scientifically prudent to limit systematic reviews to primary studies that have a comparator group.
Thus, in situations where multiple levels of evidence are available, it is generally preferable to focus available resources on the synthesis of studies that provide higher levels of evidence (e.g., RCTs, CCTs). In addition to limiting bias, such studies focus on the contrast between groups, so the impact of differences between studies may be much less than is often the case with studies lacking control groups. Thus, restricting our primary attention to higher levels of evidence (RCTs, CCTs, cohort, and case-control studies) was thought to help limit one of the most troublesome issues in meta-analysis, namely statistical heterogeneity.
As such, and with approval from the TEP, a third level of screening was implemented beyond the full relevance assessment, in which we sought to include, whenever possible, only reports of RCTs. For questions with at least three RCT reports, reports of other designs were excluded. Where no RCT reports existed, lower-level evidence, such as reports of observational studies, was included.
All disagreements were resolved by consensus and, if necessary, facilitated by a third party. Excluded studies were noted as to the reason for their ineligibility (see the lists of excluded studies at the end of the report).
Data Abstraction
After training and following a calibration exercise involving two studies, two reviewers independently abstracted the contents of each included study using an electronic Data Abstraction form developed especially for this review (Appendix C). Once a reviewer completed their work, they then checked all of the data abstracted by their counterpart. Data abstracted included the characteristics of the:
- Report (e.g., publication status, year of publication)
- Study (e.g., sample size; research design; number of arms)
- Population (e.g., baseline characteristics)
- Intervention (e.g., IV thrombolytics according to time delivered post stroke)
- Withdrawals and dropouts
Summarizing the Evidence
Overview
Evidence tables in the Appendices offer a detailed description of the included studies (e.g., study design, population characteristics, intervention[s] and outcome[s]), with a study being represented only once. The tables are organized by research question and study design, with designs purporting to induce less bias coming before those where bias might be a more substantial problem (e.g., RCTs before single-group pre-post studies). Question-specific Summary Tables embedded in the text report each study in an abbreviated fashion, highlighting key characteristics such as comparators and sample size. This allows readers to compare all studies addressing a given question. A study can appear in more than one Summary Table given that it can address more than one research question.
Study Quality
Evidence reports include studies of variable methodological quality. Differences in quality across, and within, study designs may indicate that the results of some studies are more biased (i.e., subject to systematic error) than others. Systematic reviewers need to take this information into consideration to reduce or avoid bias whenever possible. In this report, study quality was assessed through examination of each individual report. No attempt was made to contact the authors of any report. Quality was defined as the confidence that the study's design, conduct, analysis, and presentation have minimized or avoided biases in any comparisons.81 Several approaches exist to assess quality, including components, checklists, and scales. For this report, we elected to use a combination of methods in an effort to ascertain a measure of reported quality across different study designs.
For RCTs, the Jadad scale was used (Appendix C). This validated scale includes three items that assess the methods used to generate random assignments, double blinding, and the description of dropouts and withdrawals by intervention group.82 Scores range from 0 to 5, with higher scores indicating higher quality. In addition, allocation concealment was assessed as adequate, inadequate, or unclear (Appendix C).83 An a priori threshold scheme was used for sensitivity analysis: a Jadad total score of ≤2 indicates low quality, and a score >2 indicates higher quality. For allocation concealment, adequate = 1, inadequate = 2, and unclear = 3.
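The a priori threshold scheme described above can be sketched as a small coding routine; the function and dictionary names here are illustrative, not part of the report's actual procedures.

```python
def jadad_quality(score: int) -> str:
    """Classify a Jadad total score using the report's a priori threshold:
    <=2 -> low quality, >2 -> higher quality."""
    if not 0 <= score <= 5:
        raise ValueError("Jadad total scores range from 0 to 5")
    return "low" if score <= 2 else "higher"

# Numeric coding of allocation concealment used for sensitivity analysis
ALLOCATION_CONCEALMENT = {1: "adequate", 2: "inadequate", 3: "unclear"}
```

For example, a trial scoring 2 on the Jadad scale would be grouped with the low-quality studies in a sensitivity analysis, regardless of its allocation concealment rating.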
Cohort and case-control study reports were assessed using the Newcastle-Ottawa Scale (NOS). The NOS is an ongoing collaboration between the Universities of Newcastle, Australia, and Ottawa, Canada, to develop an instrument providing an easy and convenient tool for the quality assessment of nonrandomized studies to be used in a systematic review. The scale uses a “star system” in which a study is judged on three broad perspectives: the selection of the study groups; the comparability of the groups; and the ascertainment of either the exposure (for case-control studies) or the outcome of interest (for cohort studies). The inter- and intra-rater reliability of the NOS have been established. The face and content validity of the NOS have been reviewed through a critical review of the items by several experts in the field, who evaluated its clarity and completeness for the specific task of assessing the quality of studies to be used in a meta-analysis. Further, its criterion validity has been established with comparisons to more comprehensive, but cumbersome, scales. An assessment plan is being formulated for evaluating its construct validity, with consideration of the theoretical relationship of the NOS to external criteria and the internal structure of the NOS components.84
We did not conduct any sensitivity analysis of quality assessments on the observational studies, since there is little guidance on what score would indicate a poor-quality study for these assessment instruments.
Qualitative Data Synthesis
A qualitative synthesis was completed for all studies included in the Evidence Report. A description is provided of the progress of each citation through the review process, including information pertaining to each report, such as its sample size. The qualitative synthesis was performed on a question-specific basis, with studies grouped according to research design (e.g., RCTs, observational studies). Each synthesis includes a narrative summary of the key defining features of the study report, if stated (e.g., a priori description of inclusion/exclusion criteria), population (e.g., diagnosis-related), intervention/exposure (e.g., use of IA thrombolysis), outcomes, study quality, applicability, and individual study results. A brief study-by-study overview typically precedes a qualitative synthesis.
Quantitative Data Synthesis
We performed meta-analyses of RCTs when interventions were clinically homogeneous and two or more studies reported an outcome of interest. We focused on two dichotomous outcomes: (1) death, and (2) death or disability, defined as failure to achieve a score of 0 or 1 on the modified Rankin Scale (mRS). Odds ratios comparing the outcomes in the experimental and control groups were used as the effect measure for pooling. The chi-square test and the associated I2 statistic85 were used to assess heterogeneity in odds ratios. Meta-analytic pooling was performed using the DerSimonian and Laird random effects method.86
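The pooling procedure described above can be sketched in a few lines of code. This is a minimal illustration of inverse-variance pooling of log odds ratios with Cochran's Q, the I2 statistic, and the DerSimonian and Laird estimate of between-study variance; the trial counts are hypothetical, not data from this review.

```python
import math

# Hypothetical 2x2 trial data: (events_treat, n_treat, events_ctrl, n_ctrl)
trials = [
    (10, 100, 15, 100),
    (8, 120, 14, 118),
    (20, 200, 25, 195),
]

# Per-study log odds ratios and their variances (Woolf method)
log_ors, variances = [], []
for a, n1, c, n2 in trials:
    b, d = n1 - a, n2 - c
    log_ors.append(math.log((a * d) / (b * c)))
    variances.append(1 / a + 1 / b + 1 / c + 1 / d)

# Fixed-effect (inverse-variance) weights and pooled log OR
w = [1 / v for v in variances]
fixed = sum(wi * y for wi, y in zip(w, log_ors)) / sum(w)

# Cochran's Q and the I^2 statistic (percent of variation beyond chance)
q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_ors))
df = len(trials) - 1
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

# DerSimonian and Laird between-study variance tau^2
c_term = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c_term)

# Random-effects weights and pooled odds ratio
w_re = [1 / (v + tau2) for v in variances]
pooled_or = math.exp(sum(wi * y for wi, y in zip(w_re, log_ors)) / sum(w_re))
```

When the studies are homogeneous (Q at or below its degrees of freedom), tau-squared is truncated to zero and the random-effects estimate coincides with the fixed-effect estimate, which is why heterogeneity assessment and pooling are reported together.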
Publication Details
Copyright
Publisher
Agency for Healthcare Research and Quality (US), Rockville (MD)
NLM Citation
Sharma M, Clark H, Armour T, et al. Acute Stroke: Evaluation and Treatment. Rockville (MD): Agency for Healthcare Research and Quality (US); 2005 Jul. (Evidence Reports/Technology Assessments, No. 127.) 2, Methods.