
AHRQ Evidence Report Summaries. Rockville (MD): Agency for Healthcare Research and Quality (US); 1998-2005.

This publication is provided for historical reference only and the information may be out of date.

Measuring the Quality of Breast Cancer Care in Women: Summary


Current as of September 2004.

Introduction

The purpose of this systematic review of the scientific medical literature was to survey the range of measures assessing the quality of breast cancer care in women and to characterize specific parameters potentially affecting their suitability for wider use. The review was conducted by the University of Ottawa Evidence‐based Practice Center (UO‐EPC). Specific emphasis was placed on diagnosis, treatment (including supportive care), followup, and the reporting/documentation of this care. The population of interest was female adults diagnosed with or in treatment for any histological type of adenocarcinoma of the breast, including both in situ and invasive cancer. In addition to informing the research community and the public on the availability and utility of quality measures of breast cancer care, it is anticipated that the findings of this report will be used to help define an agenda for future research.

Two recent publications have suggested that the quality of health care received by Americans is less than ideal.1,2 In a survey of 30 health conditions ranging from osteoarthritis to breast cancer, McGlynn et al. observed that, on average, Americans received about half (54.9 percent) of the recommended medical care processes.2 This observation highlights a gap between ideal and actual care—that is, between what evidence has identified as recommended care and what Americans actually receive.2

Quality measures can characterize health care quality by identifying gaps in care.2,3 They can address the question of how many women who qualify for a standard of breast cancer care by virtue of their clinical situation actually receive that care in a timely fashion. Seen from a slightly different perspective, the question becomes: how many health care professionals, when attending to such women, actually deliver that care in a timely fashion?

A quality measure (e.g., "percentage of women receiving radiotherapy after breast‐conserving surgery") is defined as a mechanism to quantify the quality of a selected aspect of care by comparing it to a criterion.3 It is a way to quantify the degree of adherence to a standard of care, or quality indicator (i.e., "radiotherapy after breast‐conserving surgery"). A quality indicator becomes a quality measure in the act of measuring adherence to the standard. However, adherence data, with their potential to indicate gaps in care, are de‐emphasized here because the purpose of this review was to survey the range of quality measures.
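To make this concrete, here is a minimal sketch of how an adherence rate for such an indicator might be computed. The record structure and field names are hypothetical illustrations, not drawn from the report.

```python
# Illustrative sketch: adherence rate for the indicator
# "radiotherapy after breast-conserving surgery".
# The records and field names below are hypothetical.

records = [
    {"breast_conserving_surgery": True, "radiotherapy": True},
    {"breast_conserving_surgery": True, "radiotherapy": False},
    {"breast_conserving_surgery": False, "radiotherapy": False},
    {"breast_conserving_surgery": True, "radiotherapy": True},
]

# Denominator: women who qualify for the standard by virtue of
# their clinical situation (here, breast-conserving surgery).
eligible = [r for r in records if r["breast_conserving_surgery"]]

# Numerator: eligible women who actually received the indicated care.
adherent = [r for r in eligible if r["radiotherapy"]]

rate = 100 * len(adherent) / len(eligible)
print(f"Adherence: {rate:.1f}% ({len(adherent)}/{len(eligible)})")
# -> Adherence: 66.7% (2/3)
```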

Ideally, quality indicators, and thus quality measures, are supported by scientific medical evidence (i.e., they are "evidence based"), indicating that the care (e.g., radiotherapy after breast‐conserving surgery) is linked to improved patient outcomes.4,5 They are not mere opinion or conjecture. Scientific medical evidence is best synthesized via systematic review, followed by an expert panel consensus process to ensure that the recommended care highlighted by the synthesis is clinically relevant, up to date, and practical to deliver.6 There are various types of quality indicators, and thus measures, relating to process (e.g., whether indicated care is provided, quality of delivery of this care); structure (e.g., available equipment); and outcome (e.g., quality of life [QOL], patient satisfaction with care, survival).3,6-10

A quality indicator should be specific, complete, and clearly worded concerning factors such as target population and timeliness of care. This is necessary to ensure that:

  • Different users share the same meaning and therefore yield the same or consistent observations (its "reliability" as a measure) when, on different occasions, they consult specific data sources (such as medical records) to obtain adherence data.
  • These observations unambiguously reflect what the quality indicator was intended to identify (its "validity" as a measure).

However, quality indicators receive varying degrees of attention, and accrue varying degrees of success, regarding their scientific development as formal quality measures. Their scientific soundness as quality measures, and thus both the confidence in the meaningfulness of the observations they produce and their suitability for wider use, depends largely on their properties of reliability and validity.11 Sound reliability is demonstrated when, for example, a diagnostic test for cancer yields the same observation when administered twice, 6 hours apart. Sound validity characterizes this test if it has been shown to accurately and exclusively measure the characteristic indicating the presence of cancer. These "psychometric" properties are established through pilot testing with data sources containing indicator‐relevant data (e.g., medical records, cancer registries).11
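As an illustration of how reliability might be quantified during such pilot testing, the sketch below computes raw agreement and Cohen's kappa between two hypothetical, independent abstractions of the same medical records. The data and helper function are assumptions for illustration, not part of the report's methods.

```python
# Illustrative sketch: inter-rater (or test-retest) reliability of a
# binary quality measure, quantified as raw agreement and Cohen's kappa.

def cohens_kappa(a, b):
    """Chance-corrected agreement between two binary rating series."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    p_a, p_b = sum(a) / n, sum(b) / n
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)  # agreement by chance
    return (observed - expected) / (1 - expected)

# Hypothetical abstractions: did each record show "radiotherapy after
# breast-conserving surgery"? (1 = yes, 0 = no)
rater_a = [1, 1, 0, 1, 0, 1, 1, 0]
rater_b = [1, 1, 0, 0, 0, 1, 1, 0]

agreement = sum(x == y for x, y in zip(rater_a, rater_b)) / len(rater_a)
print(f"raw agreement = {agreement:.2f}")               # -> 0.88
print(f"kappa = {cohens_kappa(rater_a, rater_b):.2f}")  # -> 0.75
```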

In addressing the following questions, this systematic review sought to identify and describe quality measures with or without a history of scientific development.

Question 1: What measures of the quality of care are available to assess the quality of diagnosis of breast cancer in women, including appropriate use and quality of diagnostic imaging, breast biopsy, sentinel node biopsy; appropriate use of chest x‐ray, bone scan, CT scans, MRI, and blood tests; availability and accuracy of pathology staging and tumor marker status; availability, accuracy, and appropriate use of genetic testing; and patient‐reported QOL and patient satisfaction?

  • 1a: In what patient populations have these quality measures been used?
  • 1b: For what diagnosis‐related purposes have these quality measures been used?
  • 1c: What quality measures, if any, are available to assess differences in the quality of diagnosis of breast cancer in women related to patients' age, race, socioeconomic status, and ethnicity?
  • What is the evidence supporting the use of quality measures for the diagnosis of breast cancer in women, as exhibited in terms of:
    • 1d: The scientific evidence demonstrating a linkage to improvement in clinical or patient‐reported outcomes?
    • 1e: Their psychometric performance (e.g., validity, reliability, sensitivity and specificity, ceiling and floor effects)?

Question 2: What measures of the quality of care are available to assess the appropriate use and quality of treatment for breast cancer in women, including breast‐conserving surgery; mastectomy (including adequacy of surgical margins); lymph node surgery; reconstructive surgery; radiation therapy after breast‐conserving surgery and post‐mastectomy; adjuvant and neoadjuvant systemic therapy (chemotherapy and hormone therapy); hormonal and chemotherapy management of metastatic disease; dosing of radiation and chemotherapy; supportive care; and patient‐reported QOL and patient satisfaction?

  • 2a: In what patient populations have these quality measures been used?
  • 2b: For what treatment‐related purposes have these quality measures been used?
  • 2c: What quality measures, if any, are available to assess differences in the quality of treatment of breast cancer in women related to patients' age, race, socioeconomic status, and ethnicity?
  • What is the evidence supporting the use of quality measures for the treatment of breast cancer in women, as exhibited in terms of:
    • 2d: The scientific evidence demonstrating a linkage to improvement in clinical or patient‐reported outcomes?
    • 2e: Their psychometric performance (e.g., validity, reliability, sensitivity and specificity, ceiling and floor effects)?

Question 3: What measures of the quality of care are available to assess the appropriate use and quality of followup for breast cancer in women, including patient‐reported QOL and patient satisfaction?

  • 3a: In what patient populations have these quality measures been used?
  • 3b: For what followup‐related purposes have these quality measures been used?
  • 3c: What quality measures, if any, are available to assess differences in the quality of followup of breast cancer in women related to patients' age, race, socioeconomic status, and ethnicity?
  • What is the evidence supporting the use of quality measures for the followup of breast cancer in women, as exhibited in terms of:
    • 3d: The scientific evidence demonstrating a linkage to improvement in clinical or patient‐reported outcomes?
    • 3e: Their psychometric performance (e.g., validity, reliability, sensitivity and specificity, ceiling and floor effects)?

Question 4: What measures are available to assess the adequacy and completeness of documentation of pathology, operative, radiation, and chemotherapy reports?

A UO‐EPC plan to significantly expand the originally requested scope of the project, although thought to provide additional value, was eventually dropped for practical reasons. The plan involved identifying quality indicators with the potential for development as quality measures. The strategy entailed identifying, then synthesizing, evidence‐based quality indicators derived from evidence‐based practice guidelines and systematic reviews, as well as from empirical evidence either highlighted in key journal‐published commentaries or nominated by clinical experts as having the potential to overturn or modify a recommended standard of care. This approach would have required an evaluation of the strength of the scientific medical evidence supporting each quality indicator (i.e., the design types, power, quality/validity, effect sizes, and number of research studies), thereby providing a way to define its clinical "appropriateness." The stronger the evidence for an indicator (e.g., several high‐powered, high‐quality randomized controlled trials supporting a treatment), the greater its potential for scientific development as a measure. However, the amount of evidence identified as pertinent to the expanded scope necessitated that the plan be dropped. Thus, the reviewers could not assess the strength of the evidence supporting each indicator, with the exception of data linking care to improved outcomes obtained in the adherence studies.

Methods

A Technical Expert Panel (TEP) with seven members provided advisory support to the project, including refining the questions and highlighting key variables requiring consideration in the evidence synthesis. The TEP endorsed both the value of the initial expansion of the project scope and the reasonableness of its subsequent contraction.

Study Identification

A comprehensive search for citations under the expanded project scope was conducted using numerous bibliographic databases: MEDLINE®, CancerLit, Healthstar, PreMEDLINE®, EMBASE, CINAHL®, Cochrane Database of Systematic Reviews, Database of Abstracts of Reviews of Effects, Cochrane Central Register of Controlled Trials, and Health and Psychosocial Instruments (HAPI). The main search strategy was designed to retrieve items published after 1992 relevant to breast cancer diagnosis and treatment and quality measures.

The EMBASE search was limited to non‐English articles or those with an entry week in the 6 months preceding the search. An additional search strategy was developed to retrieve systematic reviews of breast cancer treatment or diagnosis. It was executed in MEDLINE® and CancerLit, with retrieval limited to material published in 1994 or later. Additional published or unpublished literature was sought through manual searches of the reference lists of included studies and key review articles, and from the files of content experts. A letter was written to a representative of the American Society of Clinical Oncology (ASCO) to obtain data concerning their quality measures currently under development; however, ASCO elected to wait until it could formally disseminate these data. Various Web sites were searched, including AHRQ's National Quality Measures Clearinghouse™. After duplicate citations were removed via Reference Manager, a final set of 3,848 unique bibliographic records was identified and posted to an Internet‐based software system for review.

The population of interest was female adults diagnosed with or in treatment for breast cancer. This covered all histological types of adenocarcinoma, including in situ and invasive cancer. Exclusions, decided upon in consultation with the Federal partners and the TEP, involved inflammatory breast cancer, Paget's disease, and phyllodes tumors. Screening and prevention fell outside the review scope. Quality indicators involved in quality measurement efforts could index any domain (e.g., structure), be derived from any source (e.g., clinical practice guideline), and have been subjected to any degree of scientific development as a quality measure (i.e., from none to complete). Reference had to be made to each indicator's empirical evidence; adherence to a standard, or quality indicator, had to be measured with respect to at least one data source (e.g., medical records). Given the unique physical and psychosocial issues related to breast cancer (e.g., body image, self‐esteem), measures of QOL and patient satisfaction had to have been either adapted or developed for past or present use with breast cancer patients.

The standard of care had to have been published prior to the quality measurement effort and to have been available to guide care in those geographic locations where the population's patterns of care were assessed using this standard. Results of efforts to collect quality measurement data had to have been made available or actively disseminated (e.g., published) starting in 1993.

Three levels of screening for relevance, with two reviewers per level, were employed: focus on bibliographic records at level 1, then on retrieved articles at levels 2 and 3. The third level of screening was required to exclude reports describing clinical practice guidelines, systematic reviews, and commentaries/editorials that had initially passed into data abstraction under the expanded project scope. Calibration exercises preceded each step of the screening process. Excluded studies were noted as to the reason for their ineligibility using a modified QUOROM format.12 Disagreements were resolved by forced consensus and, if necessary, third‐party intervention.

Data Abstraction

Following a calibration exercise involving two studies, three reviewers independently abstracted the contents of each included study using an electronic data abstraction form developed especially for this review. Abstracted data were checked by a second reviewer. Data included the report characteristics (e.g., publication status); study characteristics (e.g., data sources); population characteristics (e.g., case characteristics such as size of tumor, level of lymph node involvement, presence/absence of metastasis); characteristics of the quality indicators used in quality measurement (e.g., data concerning reliability, validity, and links to outcomes); and quality measurements (e.g., overall adherence rate, variations in rates based on review‐relevant stratifications such as age). After a calibration exercise involving two included studies, each quality indicator used in quality measurement was assessed independently by two reviewers to determine the extent of its successful development as a quality measure ("trajectory of scientific development" scheme), ranging from no attempts to establish its reliability and validity to a consistent demonstration of the soundness of these properties. Disagreements were resolved via forced consensus.

Data Synthesis

An overarching qualitative synthesis described the progress of each citation through the stages of the systematic review. Data from relevant studies were synthesized qualitatively in response to key questions. A summary table provided a question‐specific overview of included studies' relevant data, presented in greater detail in evidence tables. Since the present review was concerned with surveying and describing relevant quality indicators, quantitative syntheses were considered to be outside the scope.

Results

Literature Search

Of 3,848 records entered into the initial screening for relevance, 2,937 were excluded. All but 16 of the remaining 911 records were retrieved and subjected to a more detailed relevance assessment. Four reports were never retrieved,13-16 and 12 arrived too late to be assessed further before this report was completed.17-28 The second relevance screening then excluded 610 reports. A third level of screening, required because of the change in the scope of the project, excluded 225 reports. In total, 60 reports describing 58 studies met eligibility criteria. One study was described by two published reports.29,30 A second study was referred to in a published report31 and an abstract.32 The latter was the only abstract included; all other reports were published as journal articles.
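As a quick check, the reported flow of records is internally consistent; this short sketch simply retraces the arithmetic from the counts given above.

```python
# Retracing the reported literature flow (all counts from the text).
records = 3848
level_1_pass = records - 2937      # 911 records passed level 1
retrieved = level_1_pass - 16      # 4 never retrieved + 12 arrived late
level_2_pass = retrieved - 610     # 285 reports remained
eligible = level_2_pass - 225      # level 3 exclusions
assert (level_1_pass, retrieved, level_2_pass, eligible) == (911, 895, 285, 60)
print(eligible)  # -> 60 reports, describing 58 studies
```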

Overview

In the 60 relevant reports and 58 studies, 143 quality indicators were identified. Many different populations were investigated—typically retrospectively—using various reference standards (e.g., clinical practice guidelines) and data sources (e.g., medical records). Younger women and those with early‐stage breast cancer were more likely to have been studied. Most standards reflected processes of care, focusing most often on whether or not women with breast cancer received indicated care. There were few investigations of the quality with which this care was delivered. Where gaps in care appeared to exist, they were invariably marked by patterns of underuse. Little can be said about the sparse evidence reflecting links to outcomes. The quality indicators were invariably employed to serve internal quality improvement or external quality oversight.

Other than a small number of studies (n = 11) employing a set of validated measures (n = 12)—primarily of QOL—virtually no scientifically validated quality measures were identified.33-43 Instead, nearly all quality measurement efforts entailed quality indicators for which no reference was made, and no data were reported, indicating that they had been successfully developed scientifically as measures.

Of the 12 validated quality measures, all but one were used with reference to treatment, and all but one assessed QOL. None pertained to followup or the documentation of care. Two QOL scales had been specifically validated for use with breast cancer populations. The Functional Assessment of Cancer Therapy Scale (FACT‐B, version 3) evaluated the QOL associated with a diagnosis of breast cancer.40 The European Organization for Research and Treatment of Cancer (EORTC) QLQ‐BR23 scale38 was employed to evaluate the impact of treatment. Other validated instruments included the Patient Satisfaction Questionnaire,38 Short Form‐36,33,35,37,39,43 EORTC‐C30,34,35 Medical Outcomes Scale,37,38 Spitzer Quality of Life Index,42 Uniscale,42 Ferrans Quality of Life Scale,41 Psychosocial Adjustment to Illness Scale,41 Guttman Health Status Questionnaire,37 and Linear Analogue Self‐Assessment Scale.36

Questions 1‐1e (Diagnosis)

In the diagnosis category, 26 quality indicators were identified, with the largest number (n = 11) falling within the general category, followed by breast biopsy (n = 7). QOL and patient satisfaction were each assessed once. The general category refers to quality indicators not fitting into the predefined categories established in the project. They reflected recommendations that women be seen by specific types of health care professionals for specific reasons and within certain time frames. The greatest number of studies evaluating a given quality indicator focused on a recommendation pertaining to the use of preoperative diagnosis by fine‐needle aspiration cytology, needle biopsy, or biopsy (n = 4). Most quality indicators referred to the delivery or receipt of indicated diagnostic care (75 percent, 18/24). Only five addressed the quality with which specific diagnostic care was delivered. One study observed sound reliability data for an instrument previously validated as a QOL measure.40 Types of care represented in the task order for which no quality measurements were found include sentinel node biopsy, chest x‐ray, bone scan, CT scan, MRI, blood tests, tumor marker status, and genetic testing. Adherence data stratified by race, ethnicity, or type of health care coverage were too scarce to permit the identification of any patterns of association.

Questions 2‐2e (Treatment)

Many more quality indicators were employed in the measurement of treatment quality (n = 67) than for diagnosis. Of these, the most frequently assessed types were adjuvant systemic therapy (n = 25) and radiation therapy (n = 16). No quality measurements were found relating to reconstructive surgery or neoadjuvant systemic therapy. The greatest number of studies employing a given treatment‐related quality indicator evaluated the appropriate use of breast‐conserving surgery (n = 18) and the appropriate use of radiotherapy after breast‐conserving surgery (n = 19). Most of the quality indicators referred to the delivery or receipt of indicated treatment (70.1 percent, 47/67). Nine quality indicators assessed the quality with which specific treatment care was delivered. Eleven validated quality measures were identified, with 10 assessing QOL and 1 assessing patient satisfaction with treatment.

When a subgroup of women (older, black, lower income, lower education, or with governmental health care coverage) appeared to be disadvantaged in terms of treatment, the quality indicators were defined in terms of whether or not the women had received the indicated care. On the other hand, no subgroup of women for whom adherence data were reported (older, black, or with governmental health care coverage) was disadvantaged relative to its counterparts (younger, white, or with private health care coverage) when it came to the quality of the delivered care.

Questions 3‐3e (Followup)

Followup care was the focus of efforts to measure quality using five quality indicators. Specific types of care were not predefined. Two studies evaluated the appropriate use of guidelines for followup surveillance of breast cancer.

Question 4 (Reporting/Documentation of Care)

A substantial number of quality indicators (n = 45) were employed in quality measurement relating to reporting/documentation, with pathology reporting being the most frequently assessed type of practice (n = 42). Neither surgical reporting nor radiotherapy reporting was the focus of quality measurement attempts. Two types of quality indicators were each evaluated in five studies: reporting the assessment of microscopic margins and reporting histological type (microscopic).

Discussion

Of the 143 quality indicators identified by this review, only a small minority had received any attention as to development into formal quality measures either prior to or during the adherence studies in which they were employed. One can, as a result, have little confidence in the meaningfulness of the gaps in care suggested by the adherence rates produced by quality indicators other than patient‐centered ones (i.e., QOL, satisfaction with care). Even the interpretability and generalizability of the results produced by McGlynn et al.'s rigorous effort2 to establish the support for and clinical relevance of their breast cancer care quality indicators were limited. Although based on a systematic review of the evidence and a peer consensus process, these results were limited by the small number of eligible breast cancer cases; the less than optimal level of evidence (observational evidence and expert opinion) supporting some standards, especially treatment standards; and the likelihood that their quality indicators had not been fully pilot tested as measures. It may be best to proceed with caution before allowing even minor policy decisions to be guided by any of the adherence data reviewed in this report.

The research implications of the present findings suggest the need to close the gap between existing ways of measuring the quality of breast cancer care and the ideal, scientific approach required to highlight possible gaps in care. While more research employing the principles by which any formal measure is derived is needed, it may be wise to wait until the results of at least one important research project are reported before independently undertaking what ASCO may already be in the process of achieving.

At present, ASCO is developing a robust set of largely evidence‐based quality measures relating to stage I‐III breast cancer. Its goal is to produce a detailed profile of the reliability (e.g., inter‐rater), feasibility, and validity of measures based on pilot testing using multiple data sources (e.g., patient survey, the National Cancer Database of the American College of Surgeons). The results of ASCO's project are widely anticipated, since they may provide the validated measures required to push forward the field of quality measurement with respect to breast cancer care. It remains to be seen whether ASCO's quality measures will cover the definitions of care (e.g., quality of delivery of care, structural factors) identified by the present review as mostly absent from the literature.

A number of limitations characterized the present systematic review. In having to narrow the review scope, the UO‐EPC lost the chance to return to reference standards (e.g., clinical practice guidelines) and their evidence sources (empirical studies) to determine the clinical appropriateness of quality indicators in terms of the strength of the evidence linking these standards to improved outcomes. Eligibility criteria were predefined to include only quality indicators that were evidence based. However, it was sometimes difficult to confirm, either within or beyond a study report, that what authors described as evidence based actually constituted empirical support. As a result, it is possible that some quality indicators included in the review should instead have been excluded. Finally, the "trajectory of scientific development" scheme was designed especially for this study without the benefit of a validation process, so the data obtained through its use may be of limited reliability and validity. Nevertheless, almost none of the grades received by quality indicators suggested a history of scientific development, confirming what is likely the most unequivocal finding of this review: other than a few QOL or patient satisfaction instruments, no validated quality measures to quantify patterns of breast cancer care could be identified.

Some have asserted that the degree to which health care quality in the United States is consistent with quality standards is basically unknown and that the continuing failure to have a clear and comprehensive view of the level of quality care received by the average American will reinforce the belief that quality care is not a serious national problem.44 In our view, the failure to have reliable and valid quality measures with which to confidently point to possible gaps in breast cancer care—and to afford accountability, improvement, and research45—is a situation that does nothing to help resolve this important dilemma. Some promise is attached to ASCO's ongoing enterprise, although it will be some time before the results are known. Until validated quality measures are established, it will likely be impossible to derive a meaningful overview of gaps in breast cancer care that can inform the public about the quality of its health care choices.46

Availability of Full Report

The full evidence report from which this summary was taken was prepared for the Agency for Healthcare Research and Quality (AHRQ) by the University of Ottawa Evidence‐based Practice Center under Contract No. 290‐02‐0021. Printed copies may be obtained free of charge from the AHRQ Publications Clearinghouse by calling 800‐358‐9295. Requesters should ask for Evidence Report/Technology Assessment No. 105, Measuring the Quality of Breast Cancer Care in Women.

The Evidence Report can also be downloaded as a set of PDF files or as a zipped file.

AHRQ Publication Number 04‐E030‐1

Current as of September 2004

Internet Citation:

Schachter HM, Mamaladze V, et al. Measuring the Quality of Breast Cancer Care in Women. Summary, Evidence Report/Technology Assessment: Number 105. AHRQ Publication Number 04‐E030‐1, September 2004. Agency for Healthcare Research and Quality, Rockville, MD. http://www.ahrq.gov/clinic/epcsums/brcansum.htm

References

1.
Committee on the Quality of Health Care in America, Institute of Medicine. Crossing the quality chasm: a new health system for the 21st century. Washington, DC: National Academy Press, 2001.
2.
McGlynn EA, Asch SM, Adams J. et al. The quality of health care delivered to adults in the United States. N Engl J Med. 2003;348(26):2635–45. [PubMed: 12826639]
3.
Agency for Healthcare Research and Quality. National Quality Measures Clearinghouse. 2003. Available at: www.qualitymeasures.ahrq.gov/resources/measure_use.aspx. Accessed April 2, 2003.
4.
Malin JL, Asch SM, Kerr EA. et al. Evaluating the quality of cancer care: development of cancer quality indicators for a global quality assessment tool. Cancer. 2000;88(3):701–7. [PubMed: 10649266]
5.
McGlynn EA. Selecting common measures of quality and system performance. Med Care. 2003;41(1 Suppl):I39–47. [PubMed: 12544815]
6.
Asch SM, Kerr EA, Hamilton EG, et al. Quality of care for oncologic conditions and HIV: a review of the literature and quality indicators. Santa Monica, CA: RAND, 2000.
7.
Rubin HR, Pronovost P, Diette GB. The advantages and disadvantages of process‐based measures of health care quality. Int J Qual Health Care. 2001;13(6):469–74. [PubMed: 11769749]
8.
Kahn KL, Malin JL, Adams J, et al. Developing a reliable, valid, and feasible plan for quality‐of‐care measurement for cancer: how should we measure? Med Care 2002;40(6 Suppl):III73‐85. [PubMed: 12064761]
9.
Center for Health Policy Studies, Harvard School of Public Health Center for Quality of Care Research and Education. Understanding and choosing clinical performance measures for quality improvement: development of typology. Final report. Rockville, MD: Agency for Healthcare Research and Quality, 1995.
10.
Lawthers AW, Palmer H. In search of a few good performance measures. In: Seltzer J, Nash DB, editors. Models for measuring quality in managed care: analysis and impact. New York: Faulkner and Gray's Healthcare Information Center, 1997.
11.
Streiner DL, Norman GR. Health measurement scales. A practical guide to their development and use. 2nd ed. New York: Oxford University Press, 2000.
12.
Moher D, Cook DJ, Eastwood S. et al. Improving the quality of reports of meta‐analyses of randomised controlled trials: the QUOROM statement. Quality of Reporting of Meta‐analyses. Lancet. 1999;354(9193):1896–900. [PubMed: 10584742]
13.
Cunningham R. Perspectives. Indefinite results in ABMT (autologous bone marrow transplantation) trials add to challenges for practice standards, quality assurance in cancer care. Med Health. 1999;53(16 Suppl):1–4. [PubMed: 10387749]
14.
Breast care services: results of ECRI benchmarking study. Executive Briefings No. 56. Plymouth Meeting, PA: ECRI, 1997.
15.
Saeki T. What is the Japanese consensus on adjuvant chemotherapy in breast cancer? Breast Cancer. 2003;10(1):15–20. [PubMed: 12525758]
16.
Roila FB. Adjuvant systemic therapies in women with breast cancer: an audit of clinical practice in Italy. Ann Oncol. 2003;14(6):843–8. [PubMed: 12796020]
17.
Meijer WS. Follow‐up after oncological surgery. [Dutch]. Ned Tijdschr Geneeskd. 2002;146(4):186–7. [PubMed: 11845571]
18.
Recht A, Edge SB, Solin LJ. et al. Postmastectomy radiotherapy: clinical practice guidelines of the American Society of Clinical Oncology. J Clin Oncol. 2001;19(5):1539–69. [PubMed: 11230499]
19.
Hillner BE, Smith TJ, Desch CE. Hospital and physician volume or specialization and outcomes in cancer treatment: importance in quality of cancer care. J Clin Oncol. 2000;18(11):2327–40. [PubMed: 10829054]
20.
Schwartz GF, Solin LJ, Olivotto IA. et al. The Consensus Conference on the treatment of in situ ductal carcinoma of the breast, April 22‐25, 1999. Hum Pathol. 2000;31(2):131–9. [PubMed: 10685626]
21.
Duigou F, Herlin P, Marnay J. et al. Variation of flow cytometric DNA measurement in 1,485 primary breast carcinomas according to guidelines for DNA histogram interpretation. Cytometry. 2000;42(1):35–42. [PubMed: 10679741]
22.
Bland KI, Scott‐Conner CE, Menck H. et al. Axillary dissection in breast‐conserving surgery for stage I and II breast cancer: a National Cancer Data Base study of patterns of omission and implications for survival. J Am Coll Surg. 1999;188(6):586–95. [PubMed: 10359351]
23.
Lyman G, Djulbegovic B. Review and evaluation of clinical practice guidelines in oncology. Meeting abstract, 1998.
24.
Health care guideline: breast cancer treatment. In: Institute for Clinical Systems Integration 1996 health care guidelines, Volume 1. ICSI‐52. Bloomington, MN, 1996.
25.
Parvanova V. Modern indications for postmastectomy radiotherapy application. [Bulgarian]. Rentgenologiya i Radiologiya. 2002;41(4):275–80.
26.
Roumen RMH, Pijpers HJ, Thunnissen FBJM. et al. Summary of the guideline 'sentinel node biopsy in breast cancer.' [Dutch]. Ned Tijdschr Geneeskd. 2000;144(39):1864–7. [PubMed: 11031679]
27.
Nattinger AB, Gottlieb MS, Hoffman RG. et al. Minimal increase in use of breast‐conserving surgery from 1986 to 1990. Med Care. 1996;34(5):479–89. [PubMed: 8614169]
28.
Berner AS. Fine‐needle aspiration cytology or core biopsy when diagnosing tumours of the breast. [Norwegian]. Tidsskr Nor Laegeforen. 2003;123(12):1677–9. [PubMed: 12821988]
29.
White J, Morrow M, Moughan J. et al. Compliance with breast‐conservation standards for patients with early‐stage breast carcinoma. Cancer. 2003;97(4):893–904. [PubMed: 12569588]
30.
Morrow M, White J, Moughan J. et al. Factors predicting the use of breast‐conserving therapy in stage I and II breast carcinoma. J Clin Oncol. 2001;19(8):2254–62. [PubMed: 11304779]
31.
Ray‐Coquard I, Philip T, Lehmann M. et al. Impact of a clinical guidelines program for breast and colon cancer in a French cancer center. JAMA. 1997;278(19):1591–5. [PubMed: 9370505]
32.
Ray‐Coquard I, Philip T, Lehmann M, et al. Impact of a clinical guidelines program on medical practice in a French cancer center. Meeting abstract. J Clin Oncol 1997;16.
33.
Jansen SJ, Stiggelbout AM, Nooij MA. et al. Response shift in quality of life measurement in early‐stage breast cancer patients undergoing radiotherapy. Qual Life Res. 2000;9(6):603–15. [PubMed: 11236851]
34.
Osoba D, Burchmore M. Health‐related quality of life in women with metastatic breast cancer treated with trastuzumab (Herceptin). Semin Oncol. 1999;26(4 Suppl 12):84–8. [PubMed: 10482198]
35.
Chie WC, Huang CS, Chen JH. et al. Measurement of the quality of life during different clinical phases of breast cancer. J Formos Med Assoc. 1999;98(4):254–60. [PubMed: 10389369]
36.
Bernhard J, Hurny C, Coates AS. et al. Quality of life assessment in patients receiving adjuvant therapy for breast cancer: the IBCSG approach. The International Breast Cancer Study Group. [Erratum appears in Ann Oncol 1998 Feb;9(2):231]. Ann Oncol. 1997;8(9):825–35. [PubMed: 9358933]
37.
Frazer GH, Brown CH III, Graves TK. A longitudinal outcome assessment of quality of life indicators among selected cancer patients. J Rehabil Outcomes Measure. 1998;2(2):40–7.
38.
Molenaar S, Sprangers MA, Rutgers EJ. et al. Decision support for patients with early‐stage breast cancer: effects of an interactive breast cancer CDROM on treatment decision, satisfaction, and quality of life. J Clin Oncol. 2001;19(6):1676–87. [PubMed: 11250997]
39.
Bower JE, Ganz PA, Desmond KA. et al. Fatigue in breast cancer survivors: occurrence, correlates, and impact on quality of life. J Clin Oncol. 2000;18(4):743–53. [PubMed: 10673515]
40.
Northouse LL, Caffey M, Deichelbohrer L. et al. The quality of life of African American women with breast cancer. Res Nurs Health. 1999;22(6):449–60. [PubMed: 10630287]
41.
Dow KH, Lafferty P. Quality of life, survivorship, and psychosocial adjustment of young women with breast cancer after breast‐conserving surgery and radiation therapy. Oncol Nurs Forum. 2000;27:1555–64. [PubMed: 11103374]
42.
Perez DJ, Williams SM, Christensen EA. et al. A longitudinal study of health related quality of life and utility measures in patients with advanced breast cancer. Qual Life Res. 2001;10:587–93. [PubMed: 11822792]
43.
Mor V, Malin M, Allen S. Age differences in the psychosocial problems encountered by breast cancer patients. J Natl Cancer Inst Monogr 1994;(16):191‐7. [PubMed: 7999464]
44.
McGlynn EA, Brook RH. Keeping quality on the policy agenda. Health Aff (Millwood). 2001;20(3):82–90. [PubMed: 11585185]
45.
Galvin RS, McGlynn EA. Using performance measurement to drive improvement: a road map for change. Med Care. 2003;41(1 Suppl):I48–60. [PubMed: 12544816]
46.
Chassin MR, Galvin RW. The urgent need to improve health care quality. Institute of Medicine National Roundtable on Health Care Quality. JAMA. 1998;280(11):1000–5. [PubMed: 9749483]

