Development and pilot of clinical performance indicators for English ambulance services
A Niroshan Siriwardena,1,2 Deborah Shaw,1 Rachael Donohoe,3 Sarah Black,4 John Stephenson1

1 East Midlands Ambulance Service NHS Trust, Nottingham, UK
2 Primary Care, University of Lincoln, Lincoln, UK
3 London Ambulance Service NHS Trust, London, UK
4 South Western Ambulance Service NHS Trust, Exeter, UK

Correspondence to Professor A Niroshan Siriwardena, Faculty of Health, Life & Social Sciences, University of Lincoln, Brayford Pool, Lincoln LN6 7TS, UK; nsiriwardena@lincoln.ac.uk

Abstract

Introduction There is a compelling need to develop clinical performance indicators for ambulance services, in order to move away from indicators based primarily on response times and in light of the changing clinical demands on services. We report progress on the national pilot of clinical performance indicators for English ambulance services.

Method Clinical performance indicators were developed in five clinical areas: acute myocardial infarction, cardiac arrest, stroke (including transient ischaemic attack), asthma and hypoglycaemia. These were determined on the basis of common acute conditions presenting to ambulance services and in line with a previously published framework. Indicators were piloted by ambulance services in England and results were presented in tables and graphically using funnel (statistical process control) plots.

Results Progress in developing, agreeing and piloting the indicators was rapid, from initial agreement in May 2007 to completion of the pilot phase by the end of March 2008. The results of benchmarking of indicators are shown. The pilot has informed services in deciding the focus of their improvement programmes in 2008–2009, and the indicators have been adopted for national performance assessment of standards of prehospital care.

Conclusion The pilot will provide the basis for further development of clinical indicators, benchmarking of performance and implementation of specific evidence-based interventions to improve care in areas identified for improvement. A national performance improvement registry will enable evaluation and sharing of effective improvement methods as well as increasing stakeholder and public access to information on the quality of care provided by ambulance services.

  • Prehospital
  • ambulance
  • paramedic
  • performance indicator
  • quality improvement
  • audit
  • clinical assessment
  • effectiveness
  • emergency ambulance systems
  • emergency care systems
  • major incidents
  • clinical care


Introduction

Clinical performance indicators are increasingly being used in healthcare to assess and improve services, including in emergency1 and prehospital settings.2 A performance indicator is an assessment tool used to monitor and evaluate important governance, management, clinical and support functions that affect patient outcomes.3 Healthcare quality is 'the degree to which health services for individuals and populations increase the likelihood of desired health outcomes and are consistent with current professional knowledge'.4

Clinical performance indicators for ambulance services have previously focused primarily on emergency response times (8 and 19 min), which have not been based on strong evidence,5 and as a result may have led to poor morale, adverse outcomes for patients6 through slower access to definitive care7 as well as other opportunity costs.8 There are few validated clinical measures of effectiveness and quality in prehospital care that have been used nationally,9 partly due to the absence of a clear and agreed process for their development. A recent Delphi study of key informants showed that the development of new performance measures other than response times is the highest priority for prehospital research.10

There is, therefore, a compelling argument to develop clinical performance indicators for ambulance services, in order to move away from indicators based primarily on response times and in light of the changing clinical demands on, and the transformation agenda of, ambulance services.11

There has been little work to date to develop meaningful clinical performance indicators for ambulance services. For indicators to be meaningful, they should be measurable and realistic, aiming to address issues that matter to patients and clinicians, to benchmark performance, to reduce variations within and between health services and to bring about improvements in care for patients and users. Indicators should function as part of a planned clinical quality improvement framework that draws on modern improvement principles, methods, tools and techniques; they should be designed to provide safe, effective, patient-centred, timely, efficient and equitable healthcare. Importantly, indicators should support clinicians and services in providing better care to their patients and in delivering the aims of quality improvement.12

Clinical performance indicators are usually based on rates measured in defined populations or on significant (critical) incidents. Indicators can measure structures, processes or outcomes of health care.13 Although process measures are more sensitive to the quality of care,14 intermediate outcome measures (process measures known to have an effect on the true outcome, for example, aspirin or thrombolysis in acute myocardial infarction (AMI)) are appropriate and often superior to simple process measures, for example, electrocardiographic recording in AMI.

We aimed to develop and pilot clinical indicators, and to report on their progress, in order to use them to facilitate the quality assessment and quality improvement process for ambulance services.

Method

The development and pilot of indicators involved all English ambulance services and took place between May 2007 and March 2008. The principles agreed for development of ambulance clinical performance indicators were based on published recommendations.15 16

It was agreed that they should be developed in line with best evidence, in partnership with clinicians and service users, and linked to national structures for knowledge and evidence, clinical expertise and research and development. Their development was guided by a written protocol that stressed a number of key principles,12 including the strength of the link between process and outcomes, availability of routine measurement data, opportunity for improvement and applicability to the population under consideration.

The indicators were developed in five clinical areas: AMI, cardiac arrest, stroke (including transient ischaemic attack), asthma and hypoglycaemia. These were determined on the basis of common conditions presenting to ambulance services and agreed through expert consensus by directors of clinical care for ambulance services as having high impact (high incidence, admission to hospital or cost), potential for improved outcomes and suitability for comparison of performance between services.

Further development and refinement of specific indicators took place through discussion at two meetings, held in May and October 2007, by a subgroup of audit leads from ambulance services, leading to the construction of 20 clinical performance indicators. The measures were based on existing evidence-based guidance derived from consensus and primary evidence, including guidance from the National Institute for Health and Clinical Excellence and the Joint Royal Colleges Ambulance Liaison Committee.17 We based indicators on existing nomenclature, such as the Utstein template for cardiac arrest, where this was available,18 and thus relied on well-established guidance rather than new or emerging evidence. An example of the data collection table, which details where trusts gathered the evidence, is summarised in table 1.

Table 1

Ambulance clinical performance indicator pilot: indicator set

The process agreed for sampling was that each trust would search its database of clinical records (electronically or manually), estimating the number of cases of the clinical condition per month and the total number of emergency cases presenting. The population denominator for each criterion was the number of cases reviewed in the audit minus the exceptions to the criterion. A pragmatic approach was agreed such that, as most trusts relied on manual systems, random sampling was preferred to attempting to process and analyse large datasets from trusts.

A random sample of 300 records (or up to 300 if there were fewer than this) for the clinical condition was then examined against each performance indicator and the agreed exclusions, recording the time taken to collect the data, whether collection was electronic or manual (or both) and the person-hours required. Data were entered using specifically designed templates. In order to assess the estimated total number of calls relating to each condition as a proportion of the total calls received across the whole of each service, each audit requested an estimated number of cases per month and an estimated number of 999 or doctor urgent calls for the month.
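
As a rough illustration of the agreed sampling and denominator rules, the following minimal sketch (in Python, assuming a hypothetical record structure with invented field names such as 'aspirin_given' and 'aspirin_exception', not any trust's actual system) draws a random sample of up to 300 records and calculates a criterion rate as cases meeting the criterion divided by cases reviewed minus exceptions.

    import random

    # Illustrative sketch only: hypothetical record structure, not a trust's actual data system.
    # Each record is a dict with flags for whether the criterion was met and whether the case
    # was an agreed exception to the criterion.

    def sample_records(records, max_sample=300, seed=1):
        """Draw a random sample of up to 300 records for one clinical condition."""
        rng = random.Random(seed)
        if len(records) <= max_sample:
            return list(records)
        return rng.sample(records, max_sample)

    def criterion_rate(sampled, criterion="aspirin_given", exception="aspirin_exception"):
        """Rate = cases meeting the criterion / (cases reviewed - exceptions to the criterion)."""
        exceptions = sum(1 for r in sampled if r.get(exception))
        denominator = len(sampled) - exceptions  # population denominator, as agreed in the protocol
        numerator = sum(1 for r in sampled if r.get(criterion) and not r.get(exception))
        return numerator / denominator if denominator else None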

Data collection from each ambulance trust was coordinated through East Midlands Ambulance Service NHS Trust. The data were collated and tabulated using Excel, and the precision of results was expressed as p ± (1.96 × SE of p), where p = rate, n = number of cases in the sample and standard error (SE) = √(p(1−p)/n). Institutional performance was analysed and compared using funnel plots,19 which have the advantage of avoiding inappropriate ranking while demonstrating outliers outside binomial control limits calculated at three standard deviations (99.9%) above and below the mean.20
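
To make these calculations concrete, the sketch below (in Python rather than the Excel workbooks actually used, and with invented numbers) computes the precision interval p ± 1.96 × SE for one trust's rate and the funnel plot control limits, which are the pooled mean ± 3 × SE evaluated at that trust's caseload.

    import math

    def precision_interval(p, n, z=1.96):
        """Approximate 95% interval for an indicator rate: p +/- z * sqrt(p(1-p)/n)."""
        se = math.sqrt(p * (1 - p) / n)
        return p - z * se, p + z * se

    def funnel_limits(pooled_mean, n, sd=3.0):
        """Binomial control limits at 3 SD (~99.9%) around the pooled mean for a trust with n cases."""
        se = math.sqrt(pooled_mean * (1 - pooled_mean) / n)
        return max(0.0, pooled_mean - sd * se), min(1.0, pooled_mean + sd * se)

    # Invented example: a trust giving aspirin in 102 of 120 sampled cases,
    # compared against a pooled mean of 0.85 across all services.
    print(precision_interval(102 / 120, 120))   # precision of the trust's own rate
    print(funnel_limits(0.85, 120))             # funnel control limits at this caseload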

Results

All 11 English ambulance services participated in development of the clinical performance indicators; 20 indicators in five domains were selected (table 1). The indicators were based broadly on the concept of review criteria that are ‘systematically developed statements to assess the appropriateness of specific healthcare decisions, services and outcomes’21 in the clinical setting of calls to ambulance services attended by a paramedic or emergency medical technician.

All services submitted data for the pilot evaluation. A pragmatic approach was taken in the pilot to how services collected data, but information on this was reported with the aim of making data collection more consistent in future.

Data were collected over a 1-month period for indicators in each domain and presented as tables (table 2) and control charts (figure 1) for 20 indicators in five domains.

Table 2

Performance for administration of aspirin to patients with suspected ST elevation myocardial infarction

Figure 1

Control chart showing performance for administration of aspirin to patients with suspected ST elevation myocardial infarction.

The centre line on the control chart shows the mean of the underlying data, and the outer curved lines delineate the upper and lower control limits, which take into account the 'common cause' (natural) variation in the indicator being measured across services as well as potential variation due to differences in numbers of cases. The difference in sample sizes between services was due to different caseloads and variation in the number of cases presenting in the month of data collection, but this was corrected for in the type of analysis used. The control limits account for over 99.9% of the data and, therefore, performance for most trusts falls within these limits; where it does not, this indicates 'special cause variation', for which an explanation should be sought, such as differences in data quality or organisational systems affecting actual performance, so that recommendations for improvement can be made.
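
A minimal sketch of this interpretation (in Python, with entirely invented trust names and figures rather than pilot data) flags any service whose rate falls outside the 3 SD binomial limits around the pooled mean as showing special cause variation.

    import math

    # Invented per-trust results: (trust, cases in denominator, cases meeting the criterion)
    results = [("Trust A", 280, 252), ("Trust B", 150, 96), ("Trust C", 300, 270)]

    total_cases = sum(n for _, n, _ in results)
    centre = sum(met for _, _, met in results) / total_cases  # centre line: pooled mean across services

    for trust, n, met in results:
        se = math.sqrt(centre * (1 - centre) / n)             # binomial SE at this trust's caseload
        lower, upper = max(0.0, centre - 3 * se), min(1.0, centre + 3 * se)  # ~99.9% control limits
        rate = met / n
        flag = "within control limits" if lower <= rate <= upper else "special cause variation"
        print(f"{trust}: {rate:.2f} (limits {lower:.2f}-{upper:.2f}) -> {flag}")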

The analysis of the pilot audit data received also revealed a number of issues around case definition and data collection; some were general (Box 1) while others related to specific criteria.

Box 1 General comments based on feedback from pilot

  • Inclusion and exclusion criteria need to be explicit so as not to lend themselves to misinterpretation.

  • Criteria should be reviewed in light of comments raised during the pilot.

  • Exclusion criteria for each audit need clarification to ensure that they are explicit and include omissions highlighted during pilot.

  • Additional exclusion criteria should be submitted to the national group and agreed or rejected so that all services are collecting the same information.

  • Where audits are carried out electronically some manual verification of data may be needed; for example, where information is recorded in free text areas of patient clinical records.

  • Sampling strategies need to be agreed and consistent so that data collection processes and data quality are comparable.

  • Sample sizes should be reviewed for each audit in light of information from the pilot.

  • Although statistical process control (SPC) charts are not as dependent on equivalent quantities of data being submitted as other benchmarking methods, large variations in the data supplied will affect control limits. As far as possible, trusts should endeavour to supply the amount of data requested in the sample size, rather than fewer or more, so that comparisons are like for like. The sample sizes for each audit may need amending in the light of the information gathered during the pilot.

  • It is recognised that difficulties in gathering data may have been due to the pilots being introduced at fairly short notice and after audit plans had been drawn up. It is assumed that this will ease in the future as clinical performance indicators (CPIs) are integrated into audit plans and additional resources are provided for this work. Where trusts are having difficulty in accessing data within the relevant timescales, it would be useful if any solutions they came up with were shared across the organisations. This will be easier once the online tools and website have been developed.

Discussion

This is the first published account of the development and testing of nationally agreed clinical performance indicators for ambulance services. The project involved all ambulance services and was supported by chief executives, directors of clinical care and audit leads of English ambulance services. Twenty indicators were developed and piloted in five clinical domains using a systematic but pragmatic approach. Performance was measured in systematic samples of cases taken over a month for each domain. Comparative performance was benchmarked using control charts.

The strength of this project was the full participation of all services in development and testing of the indicators. The Healthcare Commission included submission to the pilot as part of the new standards for care in 2007/2008.22 Several problems were identified with the definition of criteria, exceptions and data collection processes but these were recognised and corrected, which was part of the purpose of the pilot.

Other models have been developed for emergency medical system indicators23 but they have not, to our knowledge, covered a range of clinical conditions or been piloted or tested on a national scale as these have been. The indicators were chosen to evaluate the performance of paramedics or emergency technicians in the clinical setting of an ambulance service response to an emergency call. The development of the clinical indicators has been based on a philosophy of accountability and quality improvement, in line with similar developments in other health sectors in the UK and internationally.24 The indicators continue to be developed and refined, and a second phase of development will include important areas such as trauma.25

The validity and usefulness of the indicators will ultimately be determined by stakeholder perceptions of value, action taken by trusts to implement changes in care and the impact of such change in terms of improvements in quality of care. As part of this process, ambulance trusts have agreed to identify clinical areas for improvement, informed by comparative plots (showing institutional variation in performance across services) indicating clinical areas where they appear to be outliers in performance, considered together with professional and patient priorities for improvement. Ambulance services will have an opportunity to set agreed national targets as well as specific improvement targets for themselves for their chosen clinical area.

Quality improvement, rather than focusing simply on headline indicators, will need to fundamentally consider how to improve processes and pathways of care within the clinical areas selected for improvement, focusing on the system of care rather than individual components. This will involve leadership and a move to a quality culture involving front-line staff in improving care based on implementing evidence and testing this using 'plan-do-study-act' cycles, process mapping, reminders, feedback and other evidence-based methods.26

In order to increase availability of information to services and eventually the general public, a National Ambulance Clinical Performance Indicator Registry is being developed to support quality improvement using a web-based technology platform.

It is envisaged that the registry will enable further development of clinical indicators, benchmarking of performance, identification and implementation of specific evidence-based interventions to improve care in areas identified for improvement, and evaluation and sharing of effective improvement methods as well as increasing stakeholder and public access to information on the quality of care provided by ambulance services.

In future, electronic clinical records will enable harmonisation of datasets and data collection to support collection of data for national as well as local clinical quality improvement. The chosen clinical indicators will be measured more frequently, charted and fed back to services using statistical process control methods to show change over time in processes or outcomes of care. Electronic clinical records also provide the possibility of reducing institutional barriers and improving continuity of care across different sectors of healthcare services as well as gathering data on true outcomes of care as primary care, hospital and ambulance services are able to share clinical information.27

This initiative offers the potential for ambulance services to identify clinical areas for improvement, to set clear aims and targets appropriate for their own organisation and to implement evidence-based methods for improvement. This should involve front-line staff, managers and directors in a system-wide, whole-service approach, giving services the potential to demonstrate significant and enduring improvements in the quality of care they provide to patients.

Acknowledgments

We would like to acknowledge the support of the Clinical Performance Indicator Subgroup of the National Ambulance Clinical Audit Steering Group, which included the following members: John Appleby-Fleming, Sarah Black, Rachael Donohoe, Jenny Lumley-Holmes, Steve Mortley, Mary Peters, Deborah Shaw, Anne Spaight and Niroshan Siriwardena.

References

Footnotes

  • Funding Department of Health.

  • Competing interests None.

  • Provenance and peer review Not commissioned; externally peer reviewed.