
Reeves BC, Scott LJ, Taylor J, et al. The Effectiveness, cost-effectiveness and acceptability of Community versus Hospital Eye Service follow-up for patients with neovascular age-related macular degeneration with quiescent disease (ECHoES): a virtual randomised balanced incomplete block trial. Southampton (UK): NIHR Journals Library; 2016 Oct. (Health Technology Assessment, No. 20.80.)


Chapter 2 Methods

The results of the cost-effectiveness analysis have been published open access in BMJ Open.16

Study design

The ECHoES study is a non-inferiority trial designed to emulate a parallel-group design (Table 1). However, as all vignettes were reviewed by both optometrists and ophthalmologists in a randomised, balanced, incomplete block design,17,18 the ECHoES trial is more analogous to a crossover trial than to a parallel-group trial. This trial is registered as ISRCTN07479761.

TABLE 1 Research question components tested in the ECHoES trial compared with the hypothetical parallel-group trial that it aimed to emulate

The trial aimed to quantify and compare the accuracy of ophthalmologists and optometrists in assessing reactivation of quiescent nAMD lesions against the reference standard (see Reference standard). This type of design was possible only for certain combinations of the total number of vignettes, the number of participants and the number of vignettes assessed per participant. For the ECHoES trial, a total of 288 vignettes were created. Forty-eight ophthalmologists and 48 optometrists each assessed a sample of 42 vignettes; each vignette was assessed by seven ophthalmologists and seven optometrists. Each sample of 42 vignettes was assessed in the same order by one optometrist and one ophthalmologist, both selected randomly from their cohorts.

Vignettes

A database of vignettes was created for the ECHoES trial using images collected in the IVAN trial (HTA reference: 07/36/01; ISRCTN 921665609), which included a large repository of fundus images and OCT images from eyes with varying levels of lesion activity. In the IVAN trial, OCT and fundus images were captured from 610 participants every 3 months for up to 2 years, generating a repository of almost 5000 sets of images, with associated clinical data. However, only a subset (estimated to be about 25% of all of the available OCT images) was captured using the newer-generation Fourier domain technology (now the clinical standard), which provides optimal images of the posterior ocular findings. The vignettes in the ECHoES trial were populated only with OCT images captured on spectral/Fourier domain systems.

Each vignette consisted of sets of retinal images (colour and OCT) at two time points (baseline and index), with accompanying clinical information (gender, age, smoking status and cardiac history) and best corrected visual acuity (BCVA) measurements obtained at both time points. The ‘baseline’ set comprised images from a study visit at which the nAMD was deemed quiescent (i.e. all macular tissue compartments were fluid free) and the ‘index’ set comprised images from another study visit. Considering both the baseline and index images, and taking into account the available clinical and BCVA information, participants reviewed these vignettes and classified the index lesion as reactivated, suspicious or quiescent. Further details are published elsewhere.19 A reference standard lesion classification was assigned to each vignette on the basis of independent assessment by three retinal experts (see Reference standard).

Participants

Recruitment

The ECHoES trial was publicised in optometry journals and forums to attract optometrists, and circulated to ophthalmologists who were members of the UK and Welsh medical retinal groups. Potential participants were directed to the ECHoES trial website, where they could read the information sheet and register their interest in the trial.

Eligibility criteria

Participants had to meet the following inclusion/exclusion criteria.

Ophthalmologists:

  • have 3 years’ post-registration experience in ophthalmology
  • have passed part 1 of the Royal College of Ophthalmologists examination or hold the Diploma in Ophthalmology or an equivalent qualification
  • be working in the NHS at the time of participation in the ECHoES trial
  • have experience within an age-related macular degeneration (AMD) service.

Optometrists:

  • be fully qualified and registered with the General Optical Council for at least 3 years
  • be practising within the General Optical Service at the time of participation in the ECHoES trial
  • not be working within AMD shared care schemes or undertaking OCT interpretation within AMD care pathways.

There were also some practical circumstances in which a potential participant was not accepted to assess the main study vignette set:

  • unable to attend any of the webinar training sessions
  • unable to achieve an adequate standard (75%) with respect to the assessment of lesion activity status (i.e. reactivated, suspicious or quiescent) on the training set of vignettes.

Training participants

Both ophthalmologists and optometrists are qualified to detect retinal pathology, but optometrists (and some ophthalmologists) may not have the skills to assess fundus and OCT images for reactivation of nAMD. Therefore, the training was designed to provide the key information necessary to perform this task successfully, so that all participants had a similar level of background knowledge when starting their main trial vignette assessments. The training included two parts.

Webinar lectures

All participants were required to attend two mandatory webinar lectures. The first webinar covered the objectives of the ECHoES trial, its design, eligibility criteria for participation, outcomes of interest, and the background to detection and management of nAMD. The second webinar covered the detailed clinical features of active and inactive nAMD, the imaging modalities used to determine activity of the lesion and interpretation of the images. Each webinar lecture lasted approximately 1 hour, with an additional 15 minutes for questions.

Test of competence

After confirmation of attendance at the webinars, participants were allocated 24 training vignettes. In order to qualify for the main trial, participants had to assign the ‘correct’ activity status to at least 75% (18 of 24) of their allocated training vignettes, according to expert assessments (see Reference standard). If participants failed to reach this threshold, they were allocated a further 24 vignettes (second training set) to complete. If participants failed to reach the performance threshold for progressing to the main trial on their second set of training vignettes, they were withdrawn from the trial. Participants who successfully passed the training phase (after either one or two attempts) were allocated 42 vignettes for assessment in the main phase of the trial.

Training vignettes were randomly sampled from the same pool of 288 vignettes as those used in the main study. However, the sampling method ensured that participants assessed different vignettes in their main study phase from those assessed during their training phase; the samples for assessment in the main trial were allocated to participant IDs in advance (as part of the trial design) and training sets of vignettes were sampled randomly from the 246 remaining vignettes.

Reference standard

The reference standard was established based on the judgements of three medical retina experts. Using the web-based application (see Implementation/management/data collection), these experts independently assessed the vignette features and made lesion classification decisions for all 288 index images. As the judgements of experts did not always agree, a consensus meeting was held to review the subset of vignettes for which experts’ classifications of lesion status disagreed. The experts reviewed these vignettes together without reference to their previous assessments and reached a consensus agreement. This consensus decision (‘reactivated’, ‘suspicious’ or ‘quiescent’ lesion) for all 288 vignettes made up the reference standard and was used to determine ‘correct’ participant lesion classification decisions.

As described in the protocol,20 the classification of a lesion as reactivated or quiescent depended on the presence or absence of predefined lesion components and on whether or not each component had increased from baseline. Two imaging modalities were used: colour fundus (CF) photographs and OCT images. The prespecified lesion components in the colour photographs were haemorrhage and exudate; the tomographic components were subretinal fluid (SRF), diffuse retinal thickening (DRT), localised intraretinal cysts (IRCs) and pigment epithelial detachment (PED). Experts indicated whether each lesion component was present or absent. Rules for classifying a lesion as reactivated or quiescent from the assessment of lesion components were prespecified (Table 2). Experts could disagree about the presence or absence of a specific lesion component and whether or not these components had increased from baseline; however, they had to follow the rules when classifying a lesion as reactivated or quiescent (note that the PED lesion component did not inform the classification of a lesion as active or quiescent). Validation prompts reflecting these rules were added to the web application after the training phase to prevent data entry or keystroke errors; the validation rules did not assist participants in making the overall assessment because participants had to enter their assessments of the relevant image features present in each vignette before classifying the lesion status.
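The precise rules linking component-level assessments to the overall lesion classification are those prespecified in Table 2. Purely as an illustration of how such component judgements can drive a rule-based classification, a minimal Python sketch follows; the specific rules encoded here are simplifying assumptions for illustration, not the trial's actual rules.

```python
# Illustrative only: the trial's classification rules are those prespecified in
# Table 2; the simple rules below are assumptions made for this sketch.
from dataclasses import dataclass

@dataclass
class ComponentAssessment:
    present: bool
    increased_from_baseline: bool = False  # only meaningful if present

def classify_lesion(components: dict[str, ComponentAssessment]) -> str:
    """Map component-level judgements to a lesion classification.

    Keys: 'haemorrhage', 'exudate' (colour fundus) and 'SRF', 'DRT', 'IRC',
    'PED' (OCT). As in the trial, PED is recorded but does not inform the
    reactivated/quiescent decision.
    """
    informative = {k: v for k, v in components.items() if k != "PED"}
    new_or_increased = [k for k, v in informative.items()
                        if v.present and v.increased_from_baseline]
    present_only = [k for k, v in informative.items()
                    if v.present and not v.increased_from_baseline]
    if new_or_increased:   # assumed rule: any increased component => reactivated
        return "reactivated"
    if present_only:       # assumed rule: stable residual findings => suspicious
        return "suspicious"
    return "quiescent"     # no informative components present

# Example: stable SRF only -> 'suspicious' under these assumed rules.
example = {
    "haemorrhage": ComponentAssessment(False),
    "exudate": ComponentAssessment(False),
    "SRF": ComponentAssessment(True, increased_from_baseline=False),
    "DRT": ComponentAssessment(False),
    "IRC": ComponentAssessment(False),
    "PED": ComponentAssessment(True, increased_from_baseline=True),
}
print(classify_lesion(example))
```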

TABLE 2 Framework for reference classifications

Owing to the short duration of the trial, participant training and assessments were undertaken concurrently with the independent experts’ assessments of the vignettes. Therefore, training sets of vignettes were scored against experts’ assessments that were complete at the time. Subsequently, checks were instituted to ensure that no participant was excluded from the main trial who might have passed the threshold performance score had the consensus reference standard been available at the time.

Outcomes

Primary outcome

The primary outcome was correct classification of the activity status of the lesion by a participant, based on assessing the index images in a vignette, compared with the reference standard (see Reference standard). Activity status could be classified as ‘reactivated’, ‘suspicious’ or ‘quiescent’. For the primary outcome, a participant’s classification was scored as ‘correct’ if:

  • both the participant and the reference standard lesion classification were reactivated
  • both the participant and the reference standard lesion classification were quiescent
  • both the participant and the reference standard lesion classification were suspicious
  • either the participant or the reference standard lesion classification was suspicious, and the other classification (reference standard or participant) was quiescent.

In effect, for the primary outcome, suspicious and quiescent classifications were grouped, making the primary outcome binary (Table 3).
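In other words, a participant's classification was scored as correct exactly when it agreed with the reference standard on whether or not the lesion was reactivated, once suspicious was grouped with quiescent. A minimal sketch of this scoring rule (illustrative Python, not trial code):

```python
def primary_outcome_correct(participant: str, reference: str) -> bool:
    """Score a participant's classification against the reference standard.

    Classifications are 'reactivated', 'suspicious' or 'quiescent'.
    Suspicious and quiescent are grouped, so the outcome reduces to
    agreement on whether or not the lesion is reactivated.
    """
    grouped = lambda c: "reactivated" if c == "reactivated" else "quiescent/suspicious"
    return grouped(participant) == grouped(reference)

assert primary_outcome_correct("suspicious", "quiescent")       # scored as correct
assert not primary_outcome_correct("quiescent", "reactivated")  # a 'serious' error
```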

TABLE 3 Definition of primary outcome

Secondary outcomes

  1. The frequency of potentially sight-threatening ‘serious’ errors. An error of this kind was considered to have occurred when a participant’s classification of a vignette was ‘lesion quiescent’ and the reference standard classification was ‘lesion reactivated’; that is, a definitive false-negative classification by the participant. Definitive false positives were not considered sight-threatening, but were tabulated. Misclassifications involving classifications of ‘lesion suspicious’ were also not considered sight-threatening.
  2. Judgements about the presence or absence of specific lesion components, for example blood and exudates in the fundus colour images, SRF, IRC, DRT and PED in the OCT images and, if present, whether or not these features had increased since baseline.
  3. Participant-rated confidence in their decisions about the primary outcome, on a 5-point scale.

Adverse events

The study did not involve any risks to the participants; therefore, it was not possible for clinical adverse events to be attributed to study-specific procedures.

Implementation/management/data collection

A secure web-based application was developed to allow participants to take part in the trial remotely (see example screenshots in Appendix 2). The website www.echoestrial.org/demo shows how assessors carried out assessments in the trial. Participants registered their interest, entered their details, completed questionnaires (regarding their opinions on ECHoES trial training and shared care), and assessed their training and main study vignettes through this application. The webinar material was also available in the web application for participants to consult if they needed to revisit it.

Additionally, the web application had tools accessible only to the trial team to help manage and monitor the conduct and progression of the trial. Further details are published elsewhere.19

Sample size

With respect to the primary outcome, the trial was designed to answer the non-inferiority question ‘Is the performance of optometrists as good as that of ophthalmologists?’. A sample of 288 vignettes was chosen to have at least 90% power to test the hypothesis that the proportion of vignettes for which lesion status was correctly classified by the optometrist group was no more than 10% (in absolute terms) below the proportion correctly classified by the ophthalmologist group. We assumed that the proportion of vignettes for which lesion status was correctly classified by the ophthalmologist group was at least 95% and that each vignette would be assessed by only one ophthalmologist and one optometrist. However, as each vignette was in fact assessed seven times by each group, the trial had 90% power to detect non-inferiority for lower proportions of vignettes correctly classified by the ophthalmologist group.
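As a generic illustration only (not a reproduction of the trial's power calculation), the sketch below checks by simulation the power of a non-inferiority comparison of two proportions under simplified assumptions: 288 vignettes, one assessment per vignette per group, 95% correct in both groups, a 10% absolute margin and a one-sided alpha of 2.5%. These simplifications ignore the repeated assessments and random effects accounted for in the trial design.

```python
# Simulation sketch of non-inferiority power for two independent proportions,
# under simplified assumptions that differ from the trial's exact calculation.
import numpy as np

rng = np.random.default_rng(1)
n, p_oph, p_opt, margin, z_alpha = 288, 0.95, 0.95, 0.10, 1.96
reps = 10_000
rejections = 0
for _ in range(reps):
    x_opt = rng.binomial(n, p_opt)
    x_oph = rng.binomial(n, p_oph)
    d = x_opt / n - x_oph / n
    se = np.sqrt(x_opt / n * (1 - x_opt / n) / n + x_oph / n * (1 - x_oph / n) / n)
    # Reject H0 ("optometrists inferior by more than 10%") if the test statistic
    # for (difference + margin) exceeds the critical value.
    if se > 0 and (d + margin) / se > z_alpha:
        rejections += 1
print(f"Estimated power under these simplified assumptions: {rejections / reps:.3f}")
```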

Statistical methods

The analysis population consisted of the 96 participants who completed the assessments of their training and main study samples of vignettes. Continuous variables were summarised by means and standard deviations (SDs), or by medians and interquartile ranges (IQRs) if distributions were skewed. Categorical data were summarised as a number and percentage. Baseline participant characteristics were described and groups formally compared using t-tests, Mann–Whitney U-tests, chi-squared tests or Fisher’s exact tests as appropriate.

Group comparisons

All primary and secondary outcomes were analysed using mixed-effects regression models, adjusting for the order in which the vignettes were viewed as a fixed effect (tertiles: 1–14, 15–28, 29–42), with participant and vignette included as random effects. All outcomes were binary and were therefore analysed using logistic regression, with group estimates presented as odds ratios (ORs) with 95% confidence intervals (CIs).
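The trial's models were fitted in SAS/Stata (see Statistical significance). As a rough Python illustration of a logistic model with crossed random effects for participant and vignette, the sketch below uses the variational Bayes mixed GLM in statsmodels, which is an approximation rather than the maximum-likelihood fit used in the trial; the data file and column names are assumptions.

```python
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

# Hypothetical long-format data: one row per vignette assessment, with assumed
# columns correct (0/1), group, order_tertile, participant_id and vignette_id.
df = pd.read_csv("assessments.csv")  # hypothetical file

model = BinomialBayesMixedGLM.from_formula(
    "correct ~ C(group) + C(order_tertile)",       # fixed effects
    {"participant": "0 + C(participant_id)",       # crossed random effects
     "vignette": "0 + C(vignette_id)"},
    df,
)
result = model.fit_vb()   # variational Bayes fit (approximation to the mixed model)
print(result.summary())
```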

In addition to the group comparisons, the influence of key vignette features [age, gender, smoking status, cardiac history (including angina, myocardial infarction and/or heart disease), and baseline and index BCVA (modelled as the sum and difference of BCVA at the two time points)] on the number of incorrect vignette classifications was investigated, adjusting for the reference standard classification (reactivated vs. quiescent/suspicious). This additional analysis was carried out using fixed-effects Poisson regression, with the number of incorrect classifications as the outcome. The prior hypothesis was that this information would not influence the number of correct (or incorrect) classifications.

The sensitivity and specificity of the primary outcome are also presented. For these performance measures, the sensitivity is the proportion of lesions for which the reference standard is ‘reactivated’ and participants correctly classified the lesion. The specificity is the proportion of lesions for which the reference standard is either ‘suspicious’ or ‘quiescent’ and the participant’s classification is also ‘suspicious’ or ‘quiescent’.
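These definitions translate directly into code; a minimal illustrative sketch (not trial code):

```python
def sensitivity_specificity(records):
    """records: iterable of (participant_classification, reference_classification).

    Sensitivity: proportion of lesions with reference standard 'reactivated'
    that the participant classified as 'reactivated'. Specificity: proportion of
    lesions with reference standard 'suspicious' or 'quiescent' that the
    participant also classified as 'suspicious' or 'quiescent'.
    """
    pos = [p for p, r in records if r == "reactivated"]
    neg = [p for p, r in records if r in ("suspicious", "quiescent")]
    sensitivity = sum(p == "reactivated" for p in pos) / len(pos)
    specificity = sum(p in ("suspicious", "quiescent") for p in neg) / len(neg)
    return sensitivity, specificity
```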

Non-inferiority limit

For the sample size calculation, it was agreed that an absolute difference of 10% would be the maximum acceptable difference between the two groups, assuming that ophthalmologists would correctly assess 95% of their vignettes. As the group comparison of the primary outcome was analysed using logistic regression and presented as an OR, this non-inferiority margin was converted to the odds scale. The limit was therefore expressed as an OR of 0.298 [i.e. the odds of a correct classification at the worst acceptable performance by optometrists (85%) divided by the odds of a correct classification at the assumed performance of ophthalmologists (95%): (0.85/0.15)/(0.95/0.05)].
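The same conversion as a one-line check:

```python
# Non-inferiority margin converted to the odds-ratio scale, as described above.
odds = lambda p: p / (1 - p)
margin_or = odds(0.85) / odds(0.95)
print(round(margin_or, 3))   # 0.298
```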

Sensitivity analysis

A sensitivity analysis of the primary outcome, regrouping vignettes graded as suspicious into the ‘lesion reactivated’ group rather than ‘quiescent lesion’ group, was undertaken to assess the sensitivity of the conclusions to the classification of the vignettes graded as suspicious. This analysis was prespecified in the analysis plan but not in the trial protocol.

Post-hoc analysis

The following post-hoc analyses were prespecified in the analysis plan, but not in the trial protocol.

  • Lesion classification decisions were tabulated against referral decisions for both groups.
  • A descriptive analysis of the time taken to complete each vignette, and of how this time changed with experience in the trial (learning curve), was performed. The relationship between this time and participants’ ‘success’ in correctly classifying vignettes was also explored.
  • Cross-tabulations and kappa statistics were used to compare experts’ initial classifications with the final reference standard. Similarly, cross-tabulations and kappa statistics were used to compare lesion component classifications between the three experts.

In addition, a descriptive analysis of the participants’ opinions about the training provided in the ECHoES trial, as well as their perceptions of shared care, was carried out (see ECHoES participants’ perspectives of training and shared care).

Missing data

By design, there were no missing data for the primary and secondary outcomes. However, the time taken to complete each vignette was calculated as the interval between successive vignette saves on the database; it was therefore not possible to calculate this for the first vignette of each session. Additionally, it was assumed that intervals longer than 20 minutes (the database timeout time) were due to interruptions, so these were set to missing. As the analysis using these times was descriptive and was not specified in the protocol, a complete case analysis was performed and the missing/available data were described.
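As an illustration of this derivation (the file and column names below are assumptions, not trial data structures):

```python
import pandas as pd

# Hypothetical long-format data: one row per saved vignette, with assumed columns
# participant_id, session_id and saved_at (timestamp), in save order.
df = pd.read_csv("vignette_saves.csv", parse_dates=["saved_at"])
df = df.sort_values(["participant_id", "session_id", "saved_at"])

# Time taken = gap between successive saves within a session; the first vignette
# of each session has no preceding save, so its time is missing by construction.
df["time_taken"] = df.groupby(["participant_id", "session_id"])["saved_at"].diff()

# Gaps longer than the 20-minute database timeout are treated as interruptions.
df.loc[df["time_taken"] > pd.Timedelta(minutes=20), "time_taken"] = pd.NaT
```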

Statistical significance

For hypothesis tests, two-tailed p-values of < 0.05 were considered statistically significant. Likelihood ratio tests were used in preference to Wald tests for hypothesis testing.

All data management and all analyses carried out by the Clinical Trials and Evaluation Unit, Bristol, were performed in SAS version 9.3 (SAS Institute Inc., Cary, NC, USA) or Stata version 13.1 (StataCorp LP, College Station, TX, USA).

Changes since commencement of study

At the end of their participation in the trial, all participants were asked to complete a questionnaire regarding their opinions about the training provided in the ECHoES trial and their attitudes towards a shared care model. This was made available to all ECHoES trial participants.

In addition, the reference standard was originally planned to include only two categories, namely lesion reactivated or lesion quiescent. However, after a concordance exercise was carried out by the three retinal experts on the ECHoES trial team, a third category of suspicious lesion was added. This change occurred before any experts’ or participants’ lesion classifications were made.

Finally, after much consideration, it was agreed the primary analysis would be carried out using logistic regression rather than Poisson regression in order to fully account for the incomplete block design. Fitting a Poisson model would not have allowed us to include both vignette and participant as random effects in the model.

Health economics

Aims and research questions

The economic evaluation component of the ECHoES trial aimed to estimate the incremental cost and incremental cost-effectiveness of community optometrists compared with hospital-based ophthalmologists performing retreatment assessments for patients with quiescent nAMD. This would enable us to determine which professional group represents the best use of scarce NHS resources in this context. The main outcome measure was the cost per correct retreatment decision, where ‘correct’ means that the trial participant’s judgement of the vignette (lesion reactivated, lesion quiescent or lesion suspicious) agreed with the experts’ judgement.

Analysis perspective

The economic evaluation took the cost perspective of the UK NHS, personal social services and private practice optometrists, and was performed in accordance with established guidelines and recent publications.21–23 Although it is possible that any incorrect retreatment decisions (false positives) could lead to costs being incurred by patients, their families or employers because of time away from usual activities, these wider societal costs were felt to be small compared with the implications for the NHS. Therefore, in line with the IVAN trial,9 we decided not to adopt a societal perspective in the ECHoES trial.

Economic evaluation methods

The methods used in the economic evaluation are summarised in Table 4. Data on resource use and costs were collected using bespoke costing questionnaires for community optometrists. With the help of the ECHoES trial team (clinicians and optometrists), we first identified the typical components/tasks included in a monitoring review for nAMD and then subcategorised these tasks into resource groups such as staffing, equipment and building space. As some optometrist practices are likely to incur set-up costs associated with assessing the need for retreatment, we compiled a list of the items of equipment necessary for performing each task within a monitoring review. The resource-use and cost questionnaire asked each participant which items of equipment from this list their practice currently owned and how much they had paid for them. The questionnaire was piloted with a handful of optometrists and ophthalmologists, who advised on whether or not it was straightforward to comprehend and complete. For any items that would be necessary for the review but that the practice did not already own, we inferred that these items would have to be purchased ‘ex novo’. Once the total costs of equipment were identified, along with their predicted life spans as suggested by study clinicians, we calculated an equivalent annual cost for all of the equipment using a 3.5% discount rate,24 and divided this cost by the number of potential patients who could undergo monitoring reviews in the community practices. See Appendix 3 for a copy of the bespoke costing questionnaires for the optometrists in the ECHoES trial.
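As an illustration of annuitising a one-off equipment purchase at a 3.5% discount rate (the purchase price, lifespan and patient throughput below are hypothetical, not trial data):

```python
# A minimal sketch of the equivalent annual cost (annuitisation) calculation.
def equivalent_annual_cost(purchase_price: float, lifespan_years: int, rate: float = 0.035) -> float:
    """Spread a one-off cost over its useful life using the standard annuity factor."""
    annuity_factor = (1 - (1 + rate) ** -lifespan_years) / rate
    return purchase_price / annuity_factor

oct_scanner = equivalent_annual_cost(30_000, lifespan_years=8)   # hypothetical OCT system
patients_per_year = 500                                          # hypothetical throughput
print(f"Equipment cost per monitoring review: £{oct_scanner / patients_per_year:.2f}")
```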

TABLE 4 Summary of methods used in the economic evaluation

We also estimated the costs associated with training optometrists to perform the assessments. These costs covered the 2 hours of webinar lectures, plus the time spent revisiting the webinars and the time spent checking other resources. Finally, the amount of time the ECHoES study clinicians spent preparing and delivering the webinars was also estimated.

For information on the costs of ophthalmologists performing the monitoring assessments, we used cost data from the IVAN trial, in which we had undertaken a very detailed micro-costing study.25

Guidance on economic evaluation advises that value-added tax (VAT) should be excluded from the analysis.21 Where possible, we used costs without VAT for our base-case analysis.

Missing data

Only 40 of the 61 optometrists initially invited to complete the health economics questionnaire (i.e. those who completed the webinar training) actually submitted it; 34 of these 40 completed their main assessments for the trial. Fifty-five of the 61 optometrists invited to complete the health economics questionnaire replied to the feedback questionnaire, which also collected information on the time each optometrist spent on training. This training time was costed and contributes to the calculation of the cost per monitoring review for optometrists. Therefore, in order to maximise the use of questionnaire replies, the cost per monitoring review was estimated using all available information reported by all optometrists, rather than focusing only on the 48 who subsequently completed the main assessments.

For consistency with the procedure adopted in the IVAN trial costing, mean values of the relevant variables were imputed for the 21 optometrists who did not submit the cost questionnaire and for the six who did not complete the feedback questionnaire. The estimated costs from the 61 practices were randomly assigned (see Cost model: random allocation) to each of the 2016 vignettes assessed by the 48 optometrists who completed the main assignment.

Cost model: random allocation

From the IVAN trial, we had estimates of the cost of clinician-led monitoring reviews from 28 eye hospitals. Using a random allocation procedure similar to the one used in the IVAN trial,9 these estimates were allocated to each of the 2016 vignette assessments carried out by the 48 ophthalmologists in the ECHoES trial. From the data collected for the ECHoES trial, we had estimates of the cost of the monitoring review from 61 (after imputation) optometric practices, which varied across practices. Although it would have been possible to link each vignette to the cost derived for the optometry practice in which the optometrist assessing the vignette worked, this would have ignored the heterogeneity between practices and the uncertainty around the estimates of the mean cost of the monitoring review. Therefore, we adopted the same approach as used for the IVAN trial, which was to allocate costs randomly, sampling from the distribution of costs across optometrist practices using the procedure described below.

Each of the 61 practices (for which the cost of a monitoring review had been estimated) reported the monthly number of nAMD patients that it could accommodate after all required changes in the practice (purchase of necessary equipment, changes to the structure/size of the practice, new staff, etc.) had been implemented. These numbers were used to calculate weights; that is, each weight was the number of nAMD patients that the practice could accommodate per month divided by the total number of patients with quiescent nAMD who could theoretically be accommodated by all of the practices per month after implementing the changes. Cumulative weights were then calculated and assigned to each optometric practice. For each vignette (potential patient), a random number was generated; this random number determined which practice’s monitoring review cost was assigned to the vignette.

For each vignette assessed by a community optometrist, we therefore randomly drew a value for the cost of the community optometry review from the distribution of the ECHoES trial monitoring review costs; for each vignette assessed by an ophthalmologist, we randomly drew a value for the cost of a hospital review from the cost distribution used in IVAN. For those vignettes for which an incorrect treatment decision resulted in an additional hospital monitoring consultation or an unnecessary injection, we drew additional random numbers to sample those consultation costs from the distribution of costs reported for the IVAN trial.
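A minimal sketch of this capacity-weighted random allocation (the costs and capacities below are hypothetical, not trial data); drawing with probabilities proportional to capacity is equivalent to the cumulative-weight-plus-random-number procedure described above:

```python
import numpy as np

rng = np.random.default_rng(2016)

# Hypothetical inputs: one estimated monitoring-review cost per optometric practice
# and the monthly number of nAMD patients each practice reported it could accommodate.
practice_costs = np.array([42.0, 55.0, 38.0, 61.0])    # £ per review (illustrative)
monthly_capacity = np.array([20, 40, 10, 30])           # patients per month (illustrative)

# Weights: each practice's capacity as a share of the total capacity across practices.
weights = monthly_capacity / monthly_capacity.sum()

# For each vignette assessed by an optometrist, draw a review cost from the
# practice-level cost distribution in proportion to these weights.
n_vignette_assessments = 2016
allocated_costs = rng.choice(practice_costs, size=n_vignette_assessments, p=weights)
print(f"Mean allocated cost per review: £{allocated_costs.mean():.2f}")
```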

Care cost pathway decision tree

In order to generate an estimate of the cost per correct retreatment decision, we first mapped the care pathways that could follow from the retreatment assessments in the study, by comparing the reference lesion classification for each vignette with the actual classification made by the study participants (48 ophthalmologists and 48 optometrists). This gave rise to two versions of a simple decision tree: one for ophthalmologists (Figure 1) and one for optometrists (Figure 2). The starting point for the decision trees was the ‘actual’ truth; the three main branches of the trees were true active, true quiescent and true suspicious. Participants’ main trial assessments provided information about the number of participants who made correct and incorrect treatment decisions compared with the reference standard.

FIGURE 1 Decision tree for hospital ophthalmologist review. Reproduced open access from Violato et al. under the terms of the Creative Commons Attribution (CC BY 4.0) licence.

FIGURE 2 Decision tree for community optometrist review. Reproduced open access from Violato et al. under the terms of the Creative Commons Attribution (CC BY 4.0) licence.

The decisions that both groups made about the vignettes were then placed into the decision trees and the associated costs for the different pathways were calculated. This process generated an average cost for each of the alternative care pathways. Any ‘incorrect’ decision implied that the patient would have had unnecessary repeat monitoring visits at the optometrist practice or at the hospital, and possibly unnecessary anti-VEGF injections if the participant misclassified the patient as requiring treatment. All costs are reported in 2013/14 prices unless specified otherwise.

Our baseline analysis calculated the average cost and outcome on a per-patient basis. From these estimates, the incremental cost-effectiveness ratios (ICERs) for the different assessment options were derived, producing an incremental cost per accurate retreatment decision. Sensitivity analyses were carried out to demonstrate the impact of variation in key parameters on the baseline cost-effectiveness results. The four sensitivity analyses conducted were:

  • Sensitivity analysis 1: all patients initiating treatment were assumed to receive a course of three ranibizumab injections given at three subsequent injection consultations, with no additional monitoring reviews. This matched the way in which discontinuous treatment was administered in the IVAN trial.
  • Sensitivity analysis 2: treatment was assumed to be one aflibercept injection given during an injection consultation.
  • Sensitivity analysis 3: treatment consisted of one bevacizumab injection given during an injection consultation.
  • Sensitivity analysis 4: only considered the cost of a monitoring review rather than considering the cost of the whole pathway.

The economic evaluation results were expressed in terms of a cost-effectiveness acceptability curve (CEAC). This indicates the likelihood that the results fall below a given cost-effectiveness ceiling and could help decision makers to assess whether optometrists are likely to represent value for money for the NHS compared with ophthalmologists when making decisions about retreatment in nAMD patients, and also compared with completely disparate health-care interventions. All health economics analyses were conducted in Stata version 12.1 (StataCorp LP, College Station, TX, USA), except the budget impact analysis.
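As an illustration of how an incremental cost per correct retreatment decision and a CEAC can be computed, the sketch below uses synthetic data and a non-parametric bootstrap; the bootstrap construction and all numbers are assumptions for illustration, as the report does not reproduce the exact procedure here.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2016  # vignette assessments per group (as in the trial design)

# Hypothetical per-assessment pathway costs (£) and correct-decision indicators;
# in the trial these were derived from the decision trees and the allocated costs.
cost_opt = rng.normal(60, 15, n);  correct_opt = rng.binomial(1, 0.85, n)
cost_oph = rng.normal(95, 20, n);  correct_oph = rng.binomial(1, 0.88, n)

# Point estimates: incremental cost and effect, and the resulting ICER
# (incremental cost per additional correct retreatment decision).
d_cost = cost_opt.mean() - cost_oph.mean()
d_effect = correct_opt.mean() - correct_oph.mean()
print("Incremental cost:", round(d_cost, 2), "Incremental effect:", round(d_effect, 4))
print("ICER:", round(d_cost / d_effect, 2))

# CEAC via bootstrap: probability that optometrist-led review is cost-effective
# (positive incremental net benefit) at each willingness-to-pay ceiling.
ceilings = np.arange(0, 501, 25)
boot = 2000
nb_diff = np.empty((boot, len(ceilings)))
for b in range(boot):
    i = rng.integers(0, n, n); j = rng.integers(0, n, n)
    dc = cost_opt[i].mean() - cost_oph[j].mean()
    de = correct_opt[i].mean() - correct_oph[j].mean()
    nb_diff[b] = ceilings * de - dc          # incremental net benefit at each ceiling
ceac = (nb_diff > 0).mean(axis=0)
for lam, p in zip(ceilings, ceac):
    print(f"£{lam} per correct decision: P(cost-effective) = {p:.2f}")
```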

Budget impact

Freeing up HES clinic time could lead to an increase in the overall capacity of the NHS both to manage the nAMD population more effectively and to manage non-nAMD eye patients (more time for ophthalmologists to spend with non-nAMD patients if they are seeing fewer nAMD patients for monitoring). Therefore, we attempted to estimate the potential time and costs that could be saved by HES clinics if some of the management of nAMD patients could be undertaken in the community. This was done by using the resource-use information collected during the IVAN trial as a basis on which to consider what proportion of an ophthalmologist’s time is spent on retreatment decisions for this group of patients relative to other aspects of their care. We brought together data from the IVAN trial (average number of patients attending a clinic) and information from the literature, and used expert opinion from the ECHoES study ophthalmologists and optometrists to try to estimate the total number of patients with quiescent or no lesion in both eyes who would be eligible for monitoring by a community optometrist in a given month and the total number of monitoring visits that could be transferred from a hospital to the community per year. We then replaced the costs of the ophthalmologist’s time with the cost of the optometrist’s time and examined the difference. The Microsoft Excel® spreadsheet (Microsoft Corporation, Redmond, WA, USA) calculations for the budget impact are reported in Appendix 3.
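As a back-of-the-envelope illustration of the budget impact arithmetic (all figures below are hypothetical, not the trial's estimates):

```python
# The potential saving is the number of monitoring visits that could be transferred
# to the community multiplied by the cost difference per review.
transferable_visits_per_year = 12_000     # hypothetical
cost_hospital_review = 95.0               # £ per review, hypothetical
cost_community_review = 60.0              # £ per review, hypothetical
annual_saving = transferable_visits_per_year * (cost_hospital_review - cost_community_review)
print(f"Potential annual saving: £{annual_saving:,.0f}")
```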

Qualitative research

Focus groups were conducted with ophthalmologists, optometrists and eye service users separately, and one-to-one interviews were conducted with other health professionals involved in the care and services of those with eye conditions.

Recruitment

Focus groups with optometrists and ophthalmologists

Two focus groups were conducted with ophthalmologists and optometrists separately, held at specialist conferences (the National Optical Conference for Optometrists in Birmingham, November 2013, and the Elizabeth Thomas Seminar for Ophthalmologists in Nottingham, December 2013). For the optometrist focus groups, information about the study was placed in the specialist press and on the ECHoES study website. Focus group participants were also recruited by the snowball technique, with health-care professionals informing other potentially interested colleagues. For the ophthalmologist focus groups, conference organisers emailed information about the study to delegates. Interested participants were asked to contact the qualitative researcher for more information.

Interviews with commissioners, clinical advisors to clinical commissioning groups and public health representatives

Clinical commissioning groups (CCGs) in England were contacted by an e-mail containing information about the study, with a request that it be forwarded to the general practitioners (GPs)/commissioners in each CCG responsible for commissioning ophthalmology care. Those who were interested were asked to contact the qualitative researcher for more information. Subsequent selection of clinical advisors and public health representatives was guided by the snowball technique. Interviews were mostly conducted in person (n = 6) although, when this was not practicable, a telephone interview was conducted (n = 4). Interviews were conducted between March and June 2014.

Focus groups with service users

Participants with nAMD were recruited from local support groups organised by the Macular Society (formerly the Macular Disease Society), a UK-based charity for anyone affected by central vision loss, with over 15,000 members and 270 local support groups around the UK (www.macularsociety.org). Three support groups in the south-west of England were initially selected: one based in a major city, one in a large town and one in a rural village. Service users with any history of nAMD were invited to join the study (regardless of whether they had nAMD in one eye or both, had dry AMD in their other eye, or were currently receiving or had previously received treatment). The researcher attended the local support meetings to explain the research and provide attendees with a participant information leaflet to take home. Contact details of those potentially interested at this stage were obtained. Those who expressed an interest were telephoned by the researcher a week later to discuss the study further and to answer any questions, to help them decide whether or not to take part.

Sampling

Basic demographic, health and professional information was collected, for sampling purposes, from those who agreed to be contacted. A purposeful sampling strategy was used to ensure that the feasibility and acceptability of the proposed shared model of care for nAMD was captured from a range of perspectives. Within this sampling approach, maximum variation was sought in relation to profession, age, gender and geographic location (for health professionals) and gender, age, type of AMD and time since diagnosis (for service users). Participant characteristics were assessed as the study progressed, and individuals or groups that were under-represented (i.e. commissioners and clinical advisors) were targeted. Where it was felt that variation had been achieved for a group (i.e. optometrists), potential participants were thanked for their interest and informed that sufficient numbers had been recruited.

Data collection

Favourable ethical approval for this study was granted by a UK NHS Research Ethics Committee. Written consent was obtained from participants at the start of each focus group or interview. Separate topic guides were developed for service users, optometrists/ophthalmologists (with additional questions for each professional group) and all other health professionals, to ensure that discussions within each group covered the same basic issues but with sufficient flexibility to allow new issues of importance to the informants to emerge. The guides were based on the study aims, relevant literature and feedback from eye-health professionals in the ECHoES study team, and consisted of open-ended questions about the current model of care for nAMD and perspectives on stable patients being monitored in the community by optometrists. They were adapted as analysis progressed to enable exploration of emerging themes.

All of the focus groups and interviews were conducted by DT; these were predominantly led by the participants themselves, with DT flexibly guiding the discussion by occasionally probing for more information, clarifying any ambiguous statements, encouraging the discussion to stay on track, and, in the focus groups, providing an opportunity for all participants to contribute to the discussion. All participants were offered a £20 gift voucher to thank them for their time (two commissioners declined the voucher).

Data analysis

Focus groups and interviews were transcribed verbatim and checked against the audio-recordings for accuracy. Transcripts were imported into NVivo version 10 (QSR International, Warrington, UK), where data were systematically assigned codes and analysed thematically using constant comparison methods derived from grounded theory methodology.26 Transcripts were repeatedly read, and potential ideas and coding schemes were noted at every stage of analysis. To ensure that each transcript was given equal attention in the coding process, data were analysed sentence by sentence and interesting features were coded. Clusters of related codes were then organised into potential themes, and emerging themes and codes within transcripts and across the data set were compared to explore shared or disparate views among participants. The transcripts were reread to ensure that the proposed themes adequately covered the coded extracts, and the themes were refined accordingly. Emerging themes were discussed with a second experienced social scientist (NM) with reference to the raw data. Data collection and analysis proceeded in parallel, with emerging findings informing further sampling and data collection. Data collection and analysis continued until the point of data saturation, that is, the point at which no new themes emerged.

The ECHoES trial participants’ perspectives of training and shared care

In addition to the information collected during focus groups and interviews, the opinions of all optometrists and ophthalmologists who had taken part in the ECHoES trial were sought in a short online questionnaire. The survey related to participants’ experiences of the training and their attitudes towards shared care for nAMD. Questions required a binary, a Likert scale or a free-text response.

Quantitative data were analysed in Microsoft Excel and presented as proportions. Free-text responses were imported into NVivo and coded into categories, which were derived from the main survey questions, relating to feedback on the training programme (experiences of using the web application, ease of training and additional resources used) and attitudes towards shared care (general perspectives of the proposed model, and perceived facilitators and barriers to implementation). Data were analysed thematically using the constant comparative techniques of grounded theory, whereby codes within and across the data set were compared to look for shared or disparate views among optometrists and ophthalmologists.

Copyright © Queen’s Printer and Controller of HMSO 2016. This work was produced by Reeves et al. under the terms of a commissioning contract issued by the Secretary of State for Health. This issue may be freely reproduced for the purposes of private research and study and extracts (or indeed, the full report) may be included in professional journals provided that suitable acknowledgement is made and the reproduction is not associated with any form of advertising. Applications for commercial reproduction should be addressed to: NIHR Journals Library, National Institute for Health Research, Evaluation, Trials and Studies Coordinating Centre, Alpha House, University of Southampton Science Park, Southampton SO16 7NS, UK.

Included under terms of UK Non-commercial Government License.
