
Editorials

Making public health interventions more evidence based

BMJ 2004; 328 doi: https://doi.org/10.1136/bmj.328.7446.966 (Published 22 April 2004) Cite this as: BMJ 2004;328:966
  1. Betty Kirkwood, professor of epidemiology and international health (betty.kirkwood{at}lshtm.ac.uk)
  1. Nutrition and Public Health Intervention Research Unit, Department of Epidemiology and Population Health, London School of Hygiene and Tropical Medicine, London WC1E 7HT

    TREND statement for non-randomised designs will make a difference

    The movement towards evidence based public health policy has been gaining momentum over the past decade. It takes an important step forward with the recent publication of the TREND statement (transparent reporting of evaluations with non-randomised designs).1 The statement's aims are to improve the quality of reporting of non-randomised evaluations, so that the conduct and findings of such research are transparent and information critical for research synthesis is not missing, and to do for public health evaluations what the CONSORT statement has done for randomised controlled trials.2

    The publication of the TREND statement reflects the increasing recognition that successful evaluation of public health interventions will necessarily entail the use of research designs other than controlled trials3-5 and various types of evidence, often in combination.4 6 The reasons for using such designs include the following.

    Firstly, the intervention is already well established or its delivery is by nature widespread, for example the evaluation of the efficacy of BCG in different settings3 or of the current advertising campaign in the United Kingdom to encourage adherence to speed limits in built up areas. No control groups exist; the evaluations need to be based on comparisons before and after the intervention and on comparisons of adopters with non-adopters.

    Secondly, the intervention has been shown to be efficacious or effective in small scale studies, conducted under ideal conditions, but its effectiveness needs to be shown when scaled up and carried out under routine conditions.6

    Thirdly, the intervention is multifaceted and the pathways to impact are complex. Victora et al argue that an impact achieved in randomised controlled trials will not convince policy makers unless it is accompanied by additional evidence showing changes in intermediate process outcomes and differences between adopters and non-adopters of the intervention.6

    Fourthly, there may be ethical issues in the use of a control group, for example when the intervention has known benefits but its efficacy against an important outcome is not known, or when patient choice needs to be factored in.7 This issue was overcome in the Gambia hepatitis B vaccine trial of the long term impact on liver cancer by using a “stepped wedge design”, in which the vaccine was introduced district by district on a staggered basis and the order of introduction was chosen at random.8

    The TREND statement follows the exact format of the revised CONSORT statement, retaining the same 22 items, with revised descriptions relevant to non-randomised designs. Some important enhancements have been made that are also relevant to randomised controlled trials evaluating public health interventions. Item 2 (background) now includes the underlying behavioural or social science theory used to develop the intervention, and item 4 (interventions) encourages a more detailed description of both the content and the delivery of the intervention.

    The authors' vision is that adoption of the TREND reporting guidelines will ensure that comparable information across studies can be consolidated and translated into generalisable knowledge and practice more easily.1 Have they got it right?

    The answer is both yes and no. The authors rightly say that this is work in progress and that improvements might be necessary. With this publication they aim to start a dialogue; they invite comments and feedback. Their decision to follow rigidly the 22 items of the CONSORT statement is a major limitation: this is not a case where one size fits all. Although I strongly endorse the suggestion that alternative means, such as linked web pages, are needed to capture fully the level of detail required if an intervention is to be reproducible,9 I encourage a rethink and an expansion to include named items relating to the development of interventions, and additional items for process and confounding variables. I would also redo item 8 (renamed assignment method), which attempts to capture the evaluation design used. This is the weakest part of the TREND statement, and it needs to be expanded to capture the whole range of evaluation designs; at present it is biased towards the evaluation of newly introduced interventions. I recommend an entry called evaluation design, with separate items for its two main dimensions4: the comparisons to be used (before and after, adopters v non-adopters, intervention v control, whether randomised or not) and the design of data collection (longitudinal, cross sectional, case-control).

    However, this is an excellent and encouraging start and an important milestone in public health research. Having the TREND statement of agreed reporting standards for non-randomised designs increases their scientific credibility and draws attention to the scientific rigour involved in their conduct and design. It should challenge the prejudice that evaluation research is second rate and encourage more researchers to undertake such research. We should look forward to its continuing development and evaluation.

    Footnotes

    • Competing interests: None declared.

    References