
Raftery J, Young A, Stanton L, et al. Clinical trial metadata: defining and extracting metadata on the design, conduct, results and costs of 125 randomised clinical trials funded by the National Institute for Health Research Health Technology Assessment programme. Southampton (UK): NIHR Journals Library; 2015 Feb. (Health Technology Assessment, No. 19.11.)


Chapter 4Theme 1: meeting the needs of the NHS

This chapter considers questions linked to the theme: ‘How was the trial seen as meeting the needs of the NHS by the HTA programme?’ After a brief review of the relevant literature, it summarises available data on how trials funded by the HTA programme could answer questions about meeting the needs of the NHS. It explores how topics of trials were generated and prioritised. It also explores the outcomes used and the time from prioritisation to publication of the findings. The methods used to answer each question are described and the results are followed by analysis and discussion.

Introduction

Several terms may usefully be defined. By commissioned research, we mean research where the topics to be researched are defined by the programme and not by the researchers who do the work. This implies that the programme is acting on behalf of the NHS and must have mechanisms for ‘knowing’ the needs of the NHS. This differs from both response-mode research (the traditional mode, with funders taking bids from expert teams of researchers) and researcher-led research (HTA’s term for the work stream introduced in 2006 where proposals are submitted by researchers but are rigorously assessed against NHS need).1

To be relevant to decision-making in the NHS, any clinical trials would need to be pragmatic as opposed to explanatory. Pragmatic trials have been defined as those with broad inclusion criteria, carried out in many centres and with patient-relevant outcomes.32

To employ a term given prominence in the Cooksey Report (2006),33 NHS-funded research had to be restricted to public interest or market failure research, that is, work that the private sector would not be interested in carrying out. This is often due to the inability to patent that which is being tested (difficult outside new drugs or, in particular, interventions made up of services rather than tightly defined products).

As the HTA programme is a commissioned programme, one might expect it to prioritise research focused on the needs of the NHS. A substantial literature discusses methods for research prioritisation, but there is much less on how potential topics should be identified, or on assessments of whether or not prioritised research is indeed ‘needs-led’.9,34–37

Since its inception, the NHS research and development (R&D) programme has focused on identifying gaps in research relevant to the NHS and prioritising them. Setting priorities is difficult and complex, partly because there is ‘no agreed upon definition for successful priority setting, so . . . no way of knowing if an organisation achieves it’.38

Different methods have been suggested, such as multidisciplinary involvement, public and patient involvement, the use of scoring systems, the Delphi process and information specialist involvement. Economic impact approaches include the payback approach or expected value of information models. Priority setting means an allocation of limited resources, which can be highly political and controversial. Developing a structured topic prioritisation process helps address this challenge.

Chase et al.9 described the different sources used by the HTA programme in 1998 to identify potential priorities. Overall, there were 1100 suggestions for the programme from four main sources: (1) a widespread consultation of health-care commissioners, providers and patients; (2) research recommendations from systematic reviews; (3) reconsideration of previous research priorities; and (4) horizon scanning. Nearly half (46%) of final programme priorities were from the widespread consultation, with 20% from systematic reviews and 10% from each of the other two areas. (The remainder came simultaneously from more than one source.) Chase et al.9 concluded that there was value in having a mix of sources. One of the aims of this chapter was to apply the approach of Chase et al.9 to all the RCTs published to mid-2011.

A small literature discussed the patient relevance of outcomes in publications, through surveys of trials published in a particular disease area. There are three notable examples:

  • Gandhi et al.39 looked at diabetes trials and found that primary outcomes were patient important in only 78 of 436 RCTs (18%).
  • Montori et al.40 also looked at diabetes trials and found that primary outcomes were patient important in only 42 of 199 RCTs (21%).
  • Rahimi et al.41 looked at cardiovascular trials and found that primary outcomes were solely patient important (death, morbidity or patient-reported outcomes) in only 93 of 413 trials (23%).

Chalmers and Glasziou42 proposed a framework for considering avoidable waste in research, with four stages. The first concerned whether or not the questions addressed by research are relevant to clinicians and patients; if they are not, Chalmers and Glasziou42 argue that the research is wasted.

Although Chalmers and Glasziou42 give some examples of the ways in which research fails to address relevant questions, they provide no quantifiable measures of waste in this stage of the framework, unlike the other three stages (design, publication and useable report), for each of which empirical estimates of waste are provided.

The extent to which RCTs have been preceded by systematic reviews can indicate the source of the topic. A recent review of 48 trials funded by the HTA programme between 2006 and 2008 indicated that 80% had been preceded by a systematic review.43

Questions addressed

The questions on which data were extracted are shown in Box 2.

BOX 2

The research questions answered under this theme

T1.1: Type of commissioning work stream?

Methods

Nine questions were piloted in theme 1 (hereafter T1). One question (‘How was the relevance to the NHS assessed?’) was deemed not feasible owing to lack of data. However, data were available on the work stream (commissioned or researcher led) (T1.1), whether or not a prior systematic review existed (T1.2) and the source of the topic (T1.3). These are explored below.

Denominators

For questions T1.1, T1.3 and T1.4 the denominator was the number of priority areas (n = 100) that preceded any call for a trial. (Note: ‘T’ refers to theme. Each of the six themes is numbered, with additional numerals referring to questions within that theme.) One hundred research suggestions/priority areas made it through to the commissioning brief stage, which led to 107 funded projects containing 123 RCTs. The denominator for questions T1.2 and T1.5–T1.9 was the total number of projects (n = 109): 107 projects via the commissioned work stream and two via direct commissioning.
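The denominator arithmetic above can be summarised in a short sketch. This is illustrative only: the constant names and the `pct` helper are ours, not part of the study's analysis.

```python
# Illustrative summary of the denominators described above; the names and
# helper are ours, not the study's analysis code.

PRIORITY_AREAS = 100            # denominator for T1.1, T1.3 and T1.4
COMMISSIONED_PROJECTS = 107     # funded via the commissioned work stream
DIRECTLY_COMMISSIONED = 2      # the two directly commissioned projects
TOTAL_PROJECTS = COMMISSIONED_PROJECTS + DIRECTLY_COMMISSIONED
COMMISSIONED_RCTS = 123         # RCTs within the 107 commissioned projects
TOTAL_RCTS = 125                # RCTs across all 109 projects

def pct(numerator: int, denominator: int) -> float:
    """Percentage rounded to one decimal place, as reported in the text."""
    return round(100 * numerator / denominator, 1)

print(TOTAL_PROJECTS)                              # 109
print(pct(COMMISSIONED_PROJECTS, TOTAL_PROJECTS))  # 98.2 (cf. question T1.1)
```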

Results

Question T1.1: type of commissioning work stream

Of the 109 funded projects (arising from the 100 priority areas), 107 (98.2%) were funded through the commissioned work stream. The other two projects were ‘directly commissioned’ {09/94/01 [head-to-head comparison of two H1N1 swine influenza vaccines in children aged 6 months to 12 years] and 99/01/01 [conventional ventilatory support versus extracorporeal membrane oxygenation for severe adult respiratory failure (CESAR)]}.

Question T1.2: prior systematic review

Of the 109 projects, 56% reported a prior systematic review in the published monograph.

Question T1.3: the source for topic identification

Of the four main sources of identification, ‘widespread consultation’ contributed 64 topics (66.7%), followed by systematic reviews (25%, 24/100) and the Horizon Scanning Centre (3%, 3/100).

The balance of these sources shifted over time. When the number of trials increased in 2001–2, the proportion of topics from systematic reviews rose to 65% (Table 5).

TABLE 5

Source of commissioned topic by year

Question T1.4: type of Health Technology Assessment advisory panel

The source of topics varied by advisory panel (Table 6). Widespread consultation was the main commissioning source for two of the three panels (83.3%, 15/18 and 72.2%, 39/54, respectively). The exception was the pharmaceutical panel, where 50% (12/24) of the commissioned topics were from systematic reviews.

TABLE 6

Number of topic suggestions by source of information and advisory panel

Question T1.5: what was the priority given by the programme to the research?

The programme prioritised 70% of projects in the top band. Of the 71 projects prioritised up to and including 1999, 50 (70.4%) were classified as A-list topics (‘recommended for commissioning – must commission’) and 18 (25.4%) were B-list topics (‘recommended for commissioning’ only). The HTA MIS database did not provide sufficient information for 4.2% of trials (3/71) (Table 7).

TABLE 7

Summary data on the priority status of the research topic up to 1999

Question T1.6: did the ‘statement of need’ change?

This question asked whether researchers undertaking the research drifted from the programme’s initial assessment of NHS need. The statement of need did not change between the commissioning brief and the monograph in 101 of 107 trials (94.4%). No comparison was possible for the remaining six projects. For three trials (2.8%), the information in the commissioning brief could not be compared with that in the monograph: no commissioning brief or vignette was available (trials ID121 and ID122), or none was prepared because it was a fast-track topic (trial ID106). For the other three projects (trials ID60, ID79 and ID86), we were unclear about how the statement was reported and whether or not it changed between the advertisement and the executive summary in the monograph. Owing to the complexity of the data extracted for this question, further analysis was not possible, and it was agreed that all data fields related to the ‘statement of need’ question would be dropped from further analyses.

Question T1.7: frequency and accuracy of reporting the primary outcomes

The 109 funded projects included 125 clinical trials. The main primary outcome, defined as that used for the sample size calculation, was reviewed independently by two researchers (RM and AY) for the 109 projects. Four projects lacked the requisite information: the monograph did not clearly state the main primary outcome, nor could it be determined during data extraction. In these cases, consensus was reached by both researchers reviewing the monograph (specifically, the sample size calculation section in the methods chapter) to determine the type of primary outcome. For one project (trial ID68) it was not possible to accurately identify the main primary outcome.

Seventy-eight (73%) of the 107 projects in the commissioned work stream reported sufficiently on the proposed primary outcome. Twenty-one projects reported limited information. Eight commissioning briefs (7.5%) contained no information about what the primary outcome was.

Question T1.8: adequate reporting of the proposed and published primary outcome

All projects were analysed to compare the proposed primary outcomes with those published (n = 109). We were able to classify the proposed primary outcome for 97 projects (89.0%) and the published primary outcome for 108 projects (99.1%) (Table 8); little changed between the two stages. Patient-important outcomes were reported in more than half of the HTA-commissioned projects, at both the proposed and published stages of the project (67.0%, 73/109 and 73.4%, 80/109, respectively). A number of outcomes could not be classified under the three main headings of Gandhi et al.:39 14 proposed primary outcomes and 18 published primary outcomes were categorised as ‘other’.

TABLE 8

The commissioned, planned and actual primary outcomes

Thirteen projects (11.9%) differed between the planned and actual type of primary outcome. These discrepancies were mainly due to limited or no information on the primary outcome in the planning documentation (proposal/protocol) (n = 12). For 10 of these projects the monograph provided sufficient information to classify the primary outcome. Table 9 shows where the discrepancies between the planned and actual reporting of the primary outcome occurred.

TABLE 9

Discrepancies between the planned and actual primary outcome measure
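The planned-versus-published comparison behind Tables 8 and 9 amounts to classifying each project's primary outcome at both stages and flagging any change. A minimal sketch follows; the project IDs, category labels and counts are hypothetical, not the study's extraction data.

```python
# Hypothetical sketch of the Table 8/9 comparison: classify each project's
# proposed and published primary outcome and flag discrepancies. The
# project IDs, categories and counts here are invented for illustration.
from collections import Counter

# (project_id, proposed_category, published_category); None = no information.
records = [
    ("P01", "patient important", "patient important"),
    ("P02", "other", "patient important"),
    ("P03", None, "patient important"),  # nothing in the planned documentation
    ("P04", "patient important", "other"),
]

published_counts = Counter(pub for _, _, pub in records)
discrepancies = [pid for pid, prop, pub in records if prop != pub]

print(published_counts["patient important"])  # 3
print(discrepancies)                          # ['P02', 'P03', 'P04']
```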

When diagnostic and screening projects (n = 20) were excluded, patient-important outcomes increased from 67% (n = 73) to 73% (65/89).

Over the period 1993–2002, 82.7% (67/81) of reported primary outcomes were patient important (Table 10). The years 1993–2002 provide a more accurate report of the type of primary outcome reported in the monograph, as a number of projects funded during the period 2003–10 have not yet published.

TABLE 10

Actual primary outcome as reported in the monograph by year of the topic advertisement (excluding diagnostic and screening projects)

Question T1.9: what was the time lag between prioritisation and publication of the monograph?

This question asked how long it took between the programme prioritising a topic and the results being published in the monograph. The interval was 8–10 years (Table 11): an average of 8 years for trials prioritised in 1993 and 9 years for those prioritised in 1999.

TABLE 11

The year the topic received its project application reference by the year in which the monograph was published
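The T1.9 calculation reduces to subtracting the prioritisation year from the monograph publication year for each trial and averaging. A sketch is below; the topic IDs and year pairs are invented, not taken from Table 11.

```python
# Hypothetical sketch of the T1.9 time-lag calculation; the topic IDs and
# years are invented, not taken from Table 11.
years = {
    "topic_a": (1993, 2001),  # (year prioritised, year monograph published)
    "topic_b": (1995, 2004),
    "topic_c": (1999, 2008),
}

lags = [published - prioritised for prioritised, published in years.values()]
mean_lag = sum(lags) / len(lags)

print(lags)  # [8, 9, 9]
```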

Analysis

In Chase et al.’s9 review, 46% of programme priorities in 1998 came from the widespread consultation and 20% from systematic reviews. Our data show a greater reliance on consultation, but with variation from year to year. The key question is what can be inferred about the importance of the HTA projects to the NHS. It would be a mistake to equate widespread consultation with NHS relevance and systematic reviews with academic interest; there is no reason why this should be so. The processes the HTA programme had in place between the identification of topics and their advertisement as commissioning briefs meant that the initial topic served only as a starting point for the real work on NHS relevance.

Unsurprisingly, most projects that were funded had been prioritised; 70% had been given the top band (A) by the programme’s prioritisation processes. Band A, ‘recommended for commissioning – must commission’, meant that the programme would ‘go the extra mile’ to ensure that research was funded in that area. What to make of this 70% figure? The priority banding was the end of a process that started with the source of the topic, addressed in the previous question. This process involved detailed consideration of potential research priorities by panels of NHS experts (patients, clinicians, managers) as well as an overarching standing group on health technologies, meeting annually for 2 days. The priority band was a summary score produced by the whole process. The process was producing research proposals of which 70% were thought to be of high relevance to the NHS and so of a high priority.

By contrast with the finding by Jones et al.43 that 80% of RCTs funded by the HTA programme and published between 2006 and 2008 were preceded by a systematic review, we found that 56% of all trials published to 2011 were preceded by one.

The finding that the statement of need did not change between the commissioning brief and that reported in the monograph in 101 out of 107 trials (94%) suggests no evidence of ‘drift’. Unfortunately, the data available in the database were not detailed enough to allow us to make further informative assessments in this area.

Primary outcomes tended to be patient relevant. Excluding projects relating to ‘diagnostic technologies and screening’ increased this figure from 67% to 73%, much higher than in previous studies (18% in Gandhi et al.,39 21% in Montori et al.40 and 23% in Rahimi et al.41).

The lag between the programme prioritising a topic and publishing the results in the monograph series was 8–10 years. As this measures the time to publication in the HTA journal rather than to publication in any journal, it overestimates the lag to some extent. The key question is the choice of benchmark: what is the right length of time against which 8–10 years should be compared?

Discussion

Question T1.9 on the 8- to 10-year time lag from topic identification to monograph publication was striking. However, we were unable to find any comparable estimate in the literature.

Although the answers to most questions were largely as expected, these questions only relate to meeting NHS need in an oblique and indirect way. Data availability limited the questions that could be asked regarding the core aim of the HTA programme, that is, how well the research it funds aims to meet the needs of the NHS. This is something that the programme should consider how best to address.

Strengths and weaknesses of the study

Addressing this overall question based on NHS need was hampered more than other questions in this report by the limitations of the database before 2000. This is because so much of the needs-related information is captured at the very start of a project, rather than during or at its completion.

This work has been given a new focus by Chalmers and Glasziou42 and their highlighting of avoidable waste in research. Their framework starts with posing questions that matter, something that is key to the HTA programme.

This project looked only at trials funded through the HTA programme’s commissioned work stream. Since 2006, the programme has developed a growing portfolio through its researcher-led work stream. Proposals for this work stream are also assessed in terms of NHS need.

Recommendations for future work

Any future work will need to take account of the data limitations on how the trials funded aimed to meet the needs of the NHS. Any future work should include seven of the questions explored in this chapter, five as is (T1.1, T1.3, T1.7, T1.8 and T1.9) and two to be amended (T1.2 and T1.4).

Unanswered questions and future research

We offer recommendations for any future similar analyses. We found it difficult to identify data that usefully, consistently or richly characterised the NHS need in these trials. This matters given the importance to the HTA programme of meeting (and being seen to meet) NHS need. We recommend that NETSCC should work with the HTA programme to develop trial metadata that more usefully, consistently and richly characterise NHS need (linked as appropriate to potential impact and reduced avoidable waste).

Copyright © Queen’s Printer and Controller of HMSO 2015. This work was produced by Raftery et al. under the terms of a commissioning contract issued by the Secretary of State for Health. This issue may be freely reproduced for the purposes of private research and study and extracts (or indeed, the full report) may be included in professional journals provided that suitable acknowledgement is made and the reproduction is not associated with any form of advertising. Applications for commercial reproduction should be addressed to: NIHR Journals Library, National Institute for Health Research, Evaluation, Trials and Studies Coordinating Centre, Alpha House, University of Southampton Science Park, Southampton SO16 7NS, UK.

Included under terms of UK Non-commercial Government License.

Bookshelf ID: NBK274339
