McKibbon KA, Lokker C, Handler SM, et al. Enabling Medication Management Through Health Information Technology. Rockville (MD): Agency for Healthcare Research and Quality (US); 2011 Apr. (Evidence Reports/Technology Assessments, No. 201.)

This publication is provided for historical reference only and the information may be out of date.

Methods

The objective of this report is to review and synthesize the available evidence regarding the effectiveness and effects of health IT on all phases of medication management, as well as reconciliation and education. The report considers a broad range of health ITs and medication management processes and concentrates on those people involved in direct clinical care: physicians, pharmacists, dentists, nurses, and other health professionals; patients and their informal caregivers; and health care administrators across all health care settings and levels of care.

Recruitment of Technical Experts and Peer Reviewers

The Medication Management through Health Information Technology (MMIT) team was made up of experts from McMaster University, the University of Pittsburgh, and McGill University. The group's expertise included medical informatics, primary care, geriatrics, internal medicine, pharmacy, the conduct of clinical trials, and systematic literature reviews. Our Technical Expert Panel (TEP) comprised 12 external experts from diverse professional backgrounds, including medication safety, health information technology in medication management, consumer informatics, and pharmacy. Their clinical expertise included specialization in pharmacy, geriatrics, reproductive health, pediatrics, and primary care. The TEP helped develop the project by refining the questions, focusing the scope, solidifying and streamlining definitions, and approving modified plans and project direction. The members of the TEP and the external reviewers are listed in Appendix E. We also sought advice from other AHRQ Evidence-based Practice Centers that had completed health IT evidence summaries.

Key Questions

The core team worked with the external advisors, the TEP, and representatives of the AHRQ to refine the key questions (KQ) presented in the “Scope and Purpose of the Systematic Review” section of Chapter 1. Before searching for the relevant literature, the content of the questions was clarified, the concepts were defined, and the types of evidence that would be included in the review were ascertained.

KQ1. Effectiveness addresses the evidence that health IT applications improve a broad range of outcomes when health IT is applied to medication management (the five phases plus the impact of postprofessional and patient education and of reconciliation among those phases). Studies that reported changes in process, cost and economic, intermediate, qualitative, and clinical patient outcomes are included.

Much literature addresses the use of health IT in medication management. To address the MMIT question using the best available research findings, two limitations were placed on the included articles. First, only hypothesis-driven articles were included. For quantitative articles this meant that those with comparison groups and appropriate statistical analysis were analyzed in this report. Qualitative studies were included if they reported use of recognized qualitative methods. Many other articles met our inclusion criteria for content and measured an outcome of interest but they were not hypothesis-driven; the report lists these citations in the KQ1: Effectiveness section of Chapter 3: Results.

KQ2. Gaps in Knowledge or Evidence addresses deficits in the knowledge and evidence needed to estimate the costs, benefits, impact, and net value of MMIT applications.

KQ3. Value Proposition requires identifying the information about MMIT applications that each stakeholder needs in order to have a clear understanding of the value proposition particular to them. It was determined that the answers to KQ2: Gaps and KQ3: Value Proposition would become evident from the review of the evidence for KQ1: Effectiveness, although studies addressing value propositions by stakeholders are also included.

KQ4. System Characteristics addresses the impact of MMIT application features on the likelihood that the systems will be purchased, implemented, and used. This evidence comes from studies measuring implementation, use, and purchasing decisions. Studies of all designs are included.

KQ5. Sustainability addresses the factors influencing the sustainability of MMIT applications, specifically: (a) the impact of the type of setting, and (b) the impact of access to other electronic data on health care quality and safety. Sustainability is not well defined. We incorporated the definition provided by Humphreys et al.,9 “the ability of a health service to provide ongoing access to appropriate quality care in a cost effective and health-effective manner.” This definition restricted the number of articles that were included in this review. Sustainability is a topic that needs further work on its definition and further analysis of existing systems.

KQ6. Two-way EDI relates to the barriers and facilitators to complete two-way electronic data interchange (EDI) between prescribers and pharmacists and how these factors vary across stakeholder groups. Through discussions with experts and the MMIT writing group we determined that the evidence would be sparse in this category. Any article studying EDI communication (one- and two-way) that includes original data (qualitative or quantitative) is included in the report.

KQ7. RCTs of CDSS addresses the extent to which clinical decision support is integrated into health IT systems for medication management and the impact of CDSS on process and health outcomes. Because of the size of the literature and the improved level of evaluation rigor and generalizability or applicability of RCTs, only RCTs are included. This question included changes in process as well as the broad range of outcomes included in KQ1: Effectiveness (clinical outcomes, behavior change, and costs and economics) across the phases of medication management as well as reconciliation and education.

Analytic Framework

To provide focus and structure for this review, an analytical model incorporating the key components of the seven key questions was developed. This model provided direction for the literature search and guidance for data abstraction and reporting (Figure 2).

Figure 2 presents the analytic framework integrating the seven Key Questions addressed in this review and depicts how the questions fit into the analytic model. The framework has eight components, or boxes, knit together by the seven Key Questions. Central to the framework is the medication management model by Bell (prescribing, order communication, dispensing, administering, and monitoring) plus education and reconciliation. Another component is the outcomes set out in the key questions: health care processes, satisfaction, usability, knowledge, skills, and attitudes, population outcomes, composite outcomes, implementation, and quality and safety of care (KQs 1, 2, 5, and 7). Type of medication and clinical outcomes (two further components) also flow from the phases and relate to KQ1 on effectiveness. The other four components of the framework are: MMIT application types (e.g., CDSS, EMRs, ePrescribing, bar codes, CPOE, pharmacy information systems, PDAs, and personal health records), relating to KQs 1 and 7; technology characteristics, including open source or proprietary status, interoperability, and CCHIT conformity, relating to KQs 4, 5b, 6, and 7; the players (prescribers, clinicians, nurses, patients, families, pharmacists, and administrators), relating to KQs 1, 2, 3, and 6; and the settings: inpatient, ambulatory, long-term care, pharmacies (institutional, stand-alone, chains, health insurance, and mail order), community, and home, relating to KQs 5 and 6a.

Figure 2

Conceptual model addressing the seven key questions: enabling medication management through health IT. CDSS = computer decision support system, EMR = electronic medical records system, e-RX = e-prescribing, BCMA = bar code medication administration, CPOE = computerized provider order entry.

Literature Search Methods

In the course of searching the literature, reference sources were identified, and a search strategy for each source was formulated, executed, and documented (see Appendix A, Exact Search Strings). For electronic databases, database-appropriate subject headings and text-words were used. Given the broad range of questions and outcomes the report addresses, searches began with text-words relating to the various types of health IT applied to medication management; these searches were then combined with both medication management terms and computer and technology terms. No limits based on methodological terms were used because all study designs were considered. A number of grey literature resources and AHRQ resources were also searched (see Appendix A, Exact Search Strings).

The search strategies were peer reviewed by a librarian following the Peer Review of Electronic Search Strategies (PRESS) checklist process for systematic review searches.35 The TEP and internal team provided references from their personal files. The reference lists of review articles were screened for eligibility.

Sources

The following databases were searched: MEDLINE,® EMBASE,® CINAHL® (Cumulative Index to Nursing and Allied Health Literature), Cochrane Database of Systematic Reviews, International Pharmaceutical Abstracts,© Compendex,© INSPEC© (which includes IEEE®), Library and Information Science Abstracts,® E-Prints in Library and Information Science,® PsycINFO,® Sociological Abstracts,© and Business Source® Complete. The search terms used are presented in Appendix A.

Supplemental searches targeting grey literature sources were conducted and included New York Academy of Medicine, SIGLE, U.S. HHS Health Information Technology, Health Technology Assessment reports from the U.K. Centre for Reviews and Dissemination, ProQuest Dissertations, National Library for Health United Kingdom (includes Bandolier), ProceedingsFirst, PapersFirst, National Technical Information Service, and Google. As part of the grey literature search, AHRQ made all references in their e-Prescribing, bar coding, and CPOE knowledge libraries available.

Search Terms and Strategies

Search terms related to specific MMIT applications were prepared and combined with both medication management terms and more general computer and technology terms. The MEDLINE® search formed the basis for all other databases, but searches were edited as needed depending on the features of the database being used. When possible, letters, editorials or commentaries, and animal studies were excluded electronically. No limits were placed on language or time, in order to capture the global literature and early studies.
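The combination logic described above can be illustrated schematically. The sketch below (Python) is illustrative only: the term lists and the simple three-block AND/OR structure are assumptions for demonstration, and the exact, peer-reviewed search strings actually used are those in Appendix A.

    # Illustrative only: assembles a Boolean query from three concept blocks,
    # mirroring the approach described above (MMIT application text-words,
    # combined with medication management terms and computer/technology terms).
    # The term lists here are examples, not the Appendix A strategies.
    mmit_application_terms = ["e-prescribing", "computerized order entry",
                              "bar code medication administration",
                              "clinical decision support"]
    medication_terms = ["medication", "prescribing", "dispensing", "drug therapy"]
    technology_terms = ["computer", "electronic health record", "information system"]

    def or_block(terms):
        """OR a set of quoted text-words into a single parenthesized block."""
        return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

    # AND the three concept blocks together, as in the search methods above.
    query = " AND ".join(or_block(block) for block in
                         (mmit_application_terms, medication_terms, technology_terms))
    print(query)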

Organization and Tracking of the Literature Search

Searching was done in the fall of 2009 and updated in early summer 2010. The results of the searches were downloaded into Reference Manager® version 10 (ISI ResearchSoft) and uploaded into our customized, Web-based systematic review management system (Health Information Research Unit, McMaster University). The system allows management of the systematic review process with improved auditing and control capabilities, including automatic production of tables and tabulations. It stores the full text of articles in portable document format (PDF) and tracks duplicates, results of title and abstract review, which articles were included or excluded with reasons, and data abstraction levels.
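As a minimal sketch of the information the review management system is described as tracking for each citation, the following Python record is offered; the field names and types are assumptions for illustration, not the system's actual schema.

    # Hypothetical per-citation record, based only on the description above;
    # field names are assumed, not the real schema of the McMaster system.
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class CitationRecord:
        citation_id: str
        pdf_path: Optional[str] = None                 # stored full text (PDF)
        is_duplicate: bool = False                     # duplicate tracking
        title_abstract_decision: Optional[str] = None  # result of level 1 screening
        exclusion_reason: Optional[str] = None         # reason recorded if excluded
        abstraction_level: Optional[str] = None        # how far abstraction progressed
        tags: List[str] = field(default_factory=list)  # e.g., "EDI"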

Title and Abstract Review

The study team reviewed titles and abstracts of all articles retrieved using prepared data abstraction forms (Appendix B, Sample Screening and Data Abstraction Forms). Two blinded, independent reviewers from a team of reviewers conducted title and abstract reviews in parallel. Both reviewers had to indicate that the article was to be excluded for it to be removed. Both reviewers also had to agree on inclusion for the article to be promoted to the next level. In the case of disagreements, a third reviewer determined if the article was to be promoted to the next level of screening.
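The screening rule just described can be expressed as a short decision function. This is a hedged illustration of the logic only (hypothetical names; the review was managed in the system described above, not with this code):

    # Sketch of the two-reviewer title/abstract rule: both must agree to exclude,
    # both must agree to include, and a third reviewer resolves disagreements.
    def screening_decision(reviewer1, reviewer2, third_reviewer=None):
        """Return 'include', 'exclude', or 'adjudicate' for one citation."""
        if reviewer1 == reviewer2 == "exclude":
            return "exclude"            # both reviewers agreed to remove the article
        if reviewer1 == reviewer2 == "include":
            return "include"            # both agreed to promote to the next level
        # Disagreement: the third reviewer decides whether to promote the article.
        return third_reviewer if third_reviewer is not None else "adjudicate"

    print(screening_decision("include", "exclude", "include"))   # -> 'include'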

This first review level was designed to detect all articles reporting on medication management in which health IT assisted the medication management process. Reviewers were instructed to consider applications as health IT if they were integrated with other information systems (rather than stand-alone applications or devices), with the systems being more than passive vehicles for data transfer. We defined health IT as electronic systems that collect, process, or exchange health information about patients and formal caregivers. We included articles only if the MMIT application was integrated with at least one health IT system, such as an EHR or EMR system, and if it processed patient-specific information and provided advice or suggestions to either the health care provider or the patients and their families on issues related to health or wellness care. We excluded stand-alone devices (no integration), with the exception of personal digital assistants (PDAs) or handheld devices into which clinicians or patients entered patient-specific information to assist in medication management. PDAs are an important focus for AHRQ. All articles about transmission or order communication between pharmacist and clinical prescriber were also included and tagged as Electronic Data Interchange (EDI).

Review articles were passed through to the second level of screening. Once identified, the bibliographies of the reviews were screened for articles with potential for inclusion and their citations were put through the screening process starting at the title and abstract level if they had not already been captured by the original search. The systematic reviews were also included in the answers to the seven key questions where appropriate.

Defining Medication Management Health IT

To be clear on what kinds of applications were included as MMIT, the following outline for MMIT applications was devised and used by screeners; a schematic sketch of these criteria in code follows the exclusion list below.

MMIT systems or programs were included if:

  • The computer or technology processed patient-specific information,
  • The information provided by the system was relevant to one of the five phases of medication management or two ancillary aspects (education and reconciliation):
    • Prescribing or ordering medications,
    • Order communication (transmission, clarification, verification),
    • Dispensing,
    • Administering (by health care provider, patient, or caregiver),
    • Monitoring (signs, symptoms, or laboratory data to ascertain patient adherence, adverse events, or the need for medication adjustment),
    • Education (of patients or care providers, but not preprofessional education),
    • Reconciliation of medication lists,
  • Someone (e.g., patient, caregiver, family, health care professional) received information in return that was, or could be, linked to patient-specific information used in decisionmaking,
  • The technology was part of, or linked to, another electronic information system,
  • The article contained outcome data related to one of the areas of interest set out in the key questions.

Articles about health IT systems or programs were excluded if:

  • The IT component was only Web or local browsing of general health information databases or information resources (e.g., online textbooks),
  • The system acted only as a conduit of information (except order communication of prescriptions between health care providers and pharmacists),
  • The system provided no feedback for patient care (e.g., surveys),
  • The system did not help with medication management decision making or provide information about any of the medication management phases (prescribing, order communication, dispensing, administering, and monitoring), or education and reconciliation,
  • The system made measurements but did not process the information,
  • The device was stand-alone and did not integrate with information systems (except PDAs using patient-specific information),
  • The health IT application was used only to extract data (e.g., pill bottles that track opening and closing, smart infusion pumps not tied to other systems, studies using EMRs for data collection when the data were for quality improvement or other related tasks but not direct patient care).
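The inclusion and exclusion outline above can be summarized as a single schematic check. The representation below (Python) is a hedged sketch: the flat boolean inputs and names are assumptions for illustration, and screeners applied these criteria by judgment rather than by running code.

    # Schematic restatement of the screening outline above. Stand-alone devices
    # are excluded unless they are PDAs/handhelds holding patient-specific data.
    MM_PHASES = {"prescribing", "order communication", "dispensing",
                 "administering", "monitoring", "education", "reconciliation"}

    def is_mmit_application(processes_patient_specific_info: bool,
                            relevant_phases: set,
                            returns_decision_relevant_info: bool,
                            linked_to_other_electronic_system: bool,
                            is_pda_with_patient_data: bool,
                            reports_outcome_of_interest: bool) -> bool:
        integrated_or_pda = (linked_to_other_electronic_system
                             or is_pda_with_patient_data)
        return (processes_patient_specific_info
                and bool(relevant_phases & MM_PHASES)   # touches at least one phase
                and returns_decision_relevant_info      # someone gets usable feedback
                and integrated_or_pda                   # not a stand-alone device
                and reports_outcome_of_interest)        # outcome data for a key question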

Data Abstraction

Given the range of questions addressed, data abstraction for KQ1 and KQ7 was performed by a core group of staff. Abstraction was done by one reviewer, and the accuracy was checked by a second reviewer. The authors of the report performed a final check on the abstracted data. Reviewers were not blinded to the article authors, institutions, or journal.

  • For all articles, reviewers abstracted information on general study characteristics: study design, the intervention, study population, setting, disease, drugs of interest, and description of the MMIT application (see Appendix B).
  • Outcomes data were abstracted from the articles that were applicable to KQ1: Effectiveness and KQ7: RCTs of CDSS regarding the MMIT application impact on a health, health care process, or other intermediate outcomes.
  • We abstracted only the main (major) endpoints that the authors identified as such. If no main endpoint measures were indicated, we abstracted data on outcomes related to medication management and clinical outcomes, relying on the order in which those outcomes were presented in the results section, methods description, or abstract.
  • We saw great variation in how article authors reported outcomes and statistical methods, even for similar systems. As a result, this report records whether the main endpoint was positively changed by the intervention (noted as + in Appendix C, Evidence Tables), unchanged (noted as = in Appendix C, Evidence Tables), or changed in the direction opposite to that sought, a negative effect (noted as – in Appendix C, Evidence Tables); an example of a negative effect is an increase in time to prescribe for an MMIT system developed to reduce prescriber time. Studies that identified unintended consequences (adverse effects) of the MMIT systems are summarized in their own section. If more than one main endpoint was reported, the positive or negative code refers to the direction of the majority of outcomes (sketched in code below).
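A minimal sketch of this coding rule follows (Python). The + / = / – symbols are those used in Appendix C; the simple majority count for studies with several main endpoints is an assumption about how mixed results were summarized.

    # Summarize a study's main endpoint(s) as '+', '=', or '-'.
    from collections import Counter

    def code_study_direction(endpoint_directions):
        """endpoint_directions is a list of per-endpoint codes ('+', '=', '-')."""
        if len(endpoint_directions) == 1:
            return endpoint_directions[0]
        # With several main endpoints, report the direction of the majority.
        return Counter(endpoint_directions).most_common(1)[0][0]

    # Example: two improved endpoints and one unchanged endpoint -> '+'.
    print(code_study_direction(["+", "+", "="]))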

Articles addressing KQ4: System Characteristics, KQ5: Sustainability, and KQ6: Two-way prescription EDI were abstracted separately to capture relevant outcome data.

Assessment of Study Quality

The included studies were assessed on the basis of the quality of their reporting of relevant data. Quantitative studies were assessed using the same criteria employed by Jimison et al.3 in a previous AHRQ report. RCT scoring was based on the Delphi consensus work by Verhagen and colleagues,10 and is referred to in this report as the ‘Verhagen/AHRQ RCT quality scale.’ Quality assessments of applicable articles were performed by more experienced reviewers to maintain consistency and accuracy. Studies using before-after, time series, survey, and qualitative methods were not assessed for quality because few well-validated instruments exist and these designs are considered lower on the hierarchy of evidence.

The quality assessment items applied to articles of each relevant design were:

Verhagen/AHRQ RCT quality scale (scored out of nine; see the scoring sketch after this list)

  1. Was the assignment to the treatment groups really random?
  2. Was the treatment allocation concealed?
  3. Were the groups similar at baseline in terms of prognostic factors?
  4. Were the eligibility criteria specified?
  5. Were outcome assessors blinded to the treatment allocation?
  6. Was the care provider blinded?
  7. Was the patient blinded?
  8. Were the point estimates and measure of variability presented for the main endpoint measure?
  9. Did the analyses include an intention-to-treat analysis?
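The scoring of the nine items above can be tallied as one point per ‘yes’ answer, giving a score out of nine. The sketch below (Python) is an illustration under that assumption; the criterion strings merely paraphrase the list above.

    # Tally the Verhagen/AHRQ RCT quality scale: one point per 'yes' criterion.
    RCT_CRITERIA = [
        "random assignment", "concealed allocation", "baseline similarity",
        "eligibility criteria specified", "blinded outcome assessors",
        "blinded care provider", "blinded patient",
        "point estimates and variability for main endpoint",
        "intention-to-treat analysis",
    ]

    def rct_quality_score(answers):
        """answers maps each criterion to True ('yes') or False ('no')."""
        return sum(1 for criterion in RCT_CRITERIA if answers.get(criterion, False))

    # Example: a trial meeting every criterion except provider and patient
    # blinding would score 7 out of 9.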

Cohort studies (scored out of ten)

  1. Was there sufficient description of the groups and the distribution of prognostic factors?
  2. Are the groups assembled at a similar point in their disease progression?
  3. Is the intervention/treatment reliably ascertained?
  4. Were the groups comparable on all important confounding factors?
  5. Was there adequate adjustment for the effects of these confounding variables?
  6. Was a dose response relationship between intervention and outcome demonstrated?
  7. Was outcome assessment blind to exposure status?
  8. Was followup long enough for the outcomes to occur?
  9. What proportion of the cohort was followed-up?
  10. Were drop out rates and reasons for drop out similar across intervention and unexposed groups?

Case-control studies (scored out of nine)

  1. Is the case definition explicit?
  2. Has the disease state of the cases been reliably assessed and validated?
  3. Were the controls randomly selected from the source population of the cases?
  4. How comparable are the cases and controls with respect to potential confounding factors?
  5. Were interventions and other exposures assessed in the same way for cases and controls?
  6. How was the response rate defined?
  7. Were the nonresponse rates and reasons for nonresponse the same in both groups?
  8. Is it possible that over-matching has occurred in that cases and controls were matched on factors related to exposure?
  9. Was an appropriate statistical analysis used (matched or unmatched)?

Case series (scored out of six)

  1. Is the study based on a representative sample selected from a relevant population?
  2. Are the criteria for inclusion explicit?
  3. Did all individuals enter the survey at a similar point in their disease progression?
  4. Was followup long enough for important events to occur?
  5. Were outcomes assessed using objective criteria or was blinding used?
  6. If comparisons of subseries are being made, was there sufficient description of the series and the distribution of prognostic factors?

Data Synthesis

Evidence tables with article details were created and ordered by key question, subquestion, and medication management phase as applicable (Appendix C). This step offered another opportunity to check abstracted elements against the original articles; any errors were brought to the attention of the abstractors of the specific section for correction. Meta-analyses were not performed because of the heterogeneity of the studies and the observational nature of the studies in most sections.

Data Entry and Quality Control

General study data for each article were abstracted by one staff member and entered into the online data abstraction forms (Appendix B). Second reviewers were generally more experienced members of the research team; one of their main priorities was to check the quality and consistency of the first reviewers’ answers and to perform the quality assessment where required.

Grading the Evidence

Because so much of the material was derived from observational studies, we did not provide grades for the evidence beyond quality scoring of the RCTs, cohort, case-control, and case series studies.

Peer Review

Throughout the project, the core team sought feedback from internal advisors and technical experts, including members of the TEP and other content and methodology experts as needed. The report was reviewed in several stages; comments were considered and incorporated into this final report. Members of the TEP and the peer reviewers are listed in Appendix E. Many of the TEP members also reviewed the initial version of the document. Both the TEP members and the review panel provided valuable comments that have made the final document stronger.
