CONSORT statement: extension to cluster randomised trials
BMJ 2004; 328 doi: https://doi.org/10.1136/bmj.328.7441.702 (Published 18 March 2004) Cite this as: BMJ 2004;328:702
All rapid responses
Editor - when followed, the new CONSORT statement will greatly
facilitate the production of accurate estimates of effect for use in meta-
analyses. To date, coefficients of intracluster correlation (ICC) and
exact cluster sizes have not been reported systematically. At present, it
is common practice to 'borrow' suitable ICCs from other studies when
calculating summary statistics. Since the ICC can be affected by many
factors (for example, type of outcome, population, cluster size), this
practice is likely to have led to imprecision in some meta-analyses. The
guidance in the new CONSORT statement to report ICCs for all primary
outcomes is therefore very welcome.
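As a rough illustration of how much a borrowed value can matter (the figures here are hypothetical, using the usual design effect approximation for equal cluster sizes, 1 + (m - 1) x ICC, where m is the cluster size): with clusters of 20 individuals, a borrowed ICC of 0.01 gives a design effect of about 1.2, whereas a true ICC of 0.05 gives about 1.95, so relying on the borrowed value would understate the standard error of the effect estimate by roughly a fifth.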
However, the statement might also be the appropriate place to address
an additional source of imprecision. Some meta-analyses are based on
standardised mean differences and therefore require knowledge of the
standard deviation. Calculating this value from the data presented in reports of cluster randomised trials poses particular challenges. For example,
sometimes studies report the standard deviation between clusters;
sometimes the standard error; and sometimes the standard error after
adjusting for baseline values. It is possible to calculate a standard
deviation from these data, but often additional information is required
(for example, the design effect used when the original standard error was
computed, or the multiple regression coefficient when baseline values were
taken into account). Thus, along with an ICC for each continuous outcome, the presentation of standard deviations (between individuals, not between clusters) would also greatly facilitate the production of accurate estimates of effect.
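For instance, in the simplest case (a minimal sketch, assuming the report gives a cluster-adjusted standard error SE for the difference in means between two arms of n1 and n2 individuals, together with the design effect DEFF that was applied), the between-individual standard deviation can be recovered as

SD = SE / sqrt( DEFF x (1/n1 + 1/n2) ).

Where baseline adjustment or markedly unequal cluster sizes are involved, further information is needed, which is exactly why routine reporting of this standard deviation would help.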
The new CONSORT statement is a significant step forward in ensuring
that cluster randomised trials are well reported, and, with a small
addition, it could be even more useful for future meta-analyses.
Competing interests: None declared
Editor – we applaud the timely publication of a revised CONSORT
statement for cluster randomised controlled trials (C-RCTs). We are
involved in the conduct of a C-RCT assessing the effects on 252 general
practitioners (GPs) of different strategies for implementing a guideline for the management of uncomplicated type 2 diabetes mellitus in the Lazio region of Italy. The design includes three arms: a training module plus administration of the guideline (arm 1), administration of the guideline without training (arm 2), and continuation of current practice (arm 3).
As the revised CONSORT statement emphasises, limited generalisability
(external validity) may be a recurring problem with the interpretation of
results of C-RCTs similar to ours. A recent review of studies assessing
the effects of measures to implement guidelines reported possible non-specific effects on the control arms of C-RCTs, which could hinder the generalisability of results (1). Little appears to be known about the possible
occurrence and types of such non-specific effects and their impact on
effect size estimation and interpretation of the findings of C-RCTs
conducted in primary care.
As we plan to use routine data for outcome assessment, we may be able
to measure any non-specific effects on arm 3 participants using a before-and-after design. We would like to hear from anyone with experience of
such a method of measurement within a C-RCT.
Reference:
1. Grimshaw JM, Thomas RE, MacLennan G, Fraser C, Ramsay CR, Vale L, et al. Effectiveness and efficiency of guideline dissemination and implementation strategies. Health Technol Assess 2004;8(6).
Competing interests: None declared
Is the intraclass correlation enough?
The editorial by Professor Campbell [1] accompanying the welcome paper
extending the CONSORT statement to cluster randomised trials [2] includes
the statement “The key statistic is the intracluster correlation
coefficient, which is the ratio of the between cluster variation of the
outcome variable to the total variation”.
However, it is arguable that it is the variance of the treatment
parameter which really matters for interpretation of the results. The
ratio of this variance (after allowing for the design) to the variance
ignoring the clustering is the design effect, which is determined by both
the intraclass correlation (ICC) and the cluster size.
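In symbols (a standard relation rather than anything stated in either paper): if Vc is the variance of the treatment estimate allowing for clustering and V0 the variance ignoring it, then

design effect = Vc / V0,

which for equal cluster sizes m is approximately 1 + (m - 1) x ICC; the same ICC can therefore correspond to very different design effects depending on the cluster size.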
The design effect can be estimated directly for a proposed study if a plausibly similar dataset is available or can be constructed, without calculating the ICC (and in any case the effect of the ICC can be reduced by reducing the cluster size by sampling). For example, using Stata, the variance given by a model allowing for cluster effects can be estimated with the xtpoisson command and compared with the variance obtained from ordinary Poisson regression ignoring the clustering; a sketch of this approach is given after the list below. Advantages are that:
a) The multiplier for sample size (design effect) is immediately
available and is estimated using the scale and analysis actually proposed,
including use of a robust estimate of variance.
b) Monte Carlo methods can be used to generate confidence limits for
the design effect (so that a “pessimistic” value can be adopted). It is, of course, also possible to put confidence limits on the ICC.
c) It is not necessary to choose a particular method for calculating the ICC: many alternative methods for calculating the ICC or similar measures, with varying degrees of bias, have been proposed for binary (and hence Poisson) data [3,4].
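For readers without Stata, a minimal sketch of the same comparison follows, written in Python with statsmodels; it substitutes a GEE fit with an exchangeable working correlation for Stata's random-effects xtpoisson, and every number in the simulated dataset is an illustrative assumption rather than anything taken from a real trial.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Construct a plausibly similar dataset: 30 clusters of 20 subjects,
# alternate clusters "treated", Poisson counts with a normal cluster
# effect on the log scale (all parameter values are illustrative).
n_clusters, m = 30, 20
cluster = np.repeat(np.arange(n_clusters), m)
treat = np.repeat(np.arange(n_clusters) % 2, m)
u = rng.normal(0.0, 0.3, n_clusters)            # between-cluster heterogeneity
y = rng.poisson(np.exp(0.5 + 0.4 * treat + u[cluster]))
df = pd.DataFrame({"y": y, "treat": treat, "cluster": cluster})

# Variance of the treatment coefficient allowing for clustering:
# GEE with an exchangeable working correlation and a robust (sandwich) variance.
gee = smf.gee("y ~ treat", groups="cluster", data=df,
              family=sm.families.Poisson(),
              cov_struct=sm.cov_struct.Exchangeable()).fit()
var_clustered = gee.bse["treat"] ** 2

# Variance from ordinary Poisson regression ignoring the clustering.
glm = smf.glm("y ~ treat", data=df, family=sm.families.Poisson()).fit()
var_naive = glm.bse["treat"] ** 2

# The ratio is the design effect for the treatment parameter; repeating the
# simulation many times (Monte Carlo) would give limits from which a
# "pessimistic" value could be chosen.
print("estimated design effect:", var_clustered / var_naive)

The same comparison could equally be run on pilot or routine data rather than simulated data, which is closer to what is proposed above.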
Certainly, published ICC estimates may help in the design of future studies, but in that case item 17 in the revised CONSORT statement does not go far enough: results should also include details of the methods used to calculate the ICC and the design effect (both for the sample size estimate and for the value observed).
References
1. Campbell MJ. Extending CONSORT to include cluster trials. BMJ 2004;328:654-655.
2. Campbell MK, Elbourne DR, Altman DG, for the CONSORT Group. The CONSORT statement: extension to cluster randomised trials. BMJ 2004;328:702-708.
3. Goldstein H, Browne W, Rasbash J. Partitioning variation in multilevel models. Understanding Statistics 2002;1:223-232.
4. Ridout MS, Demetrio CGB, Firth D. Estimating intraclass correlation for binary data. Biometrics 1999;55:137-148.
Competing interests: None declared