Personalising results from large trials
BMJ 2015;350:h553 doi: https://doi.org/10.1136/bmj.h553 (Published 20 February 2015)
- Rafael Perera, professor and director of Medical Statistics Group
- Richard J Stevens, associate professor and deputy director of Medical Statistics Group
- Nuffield Department of Primary Care Health Sciences, University of Oxford, Oxford OX2 6GG
- Correspondence to: R Perera rafael.perera@phc.ox.ac.uk
Health professionals make decisions for individual patients, but clinical trials typically tell us which interventions work on “average.” Knowing with complete certainty that an intervention will (or will not) improve this patient’s health is perhaps an unattainable ideal. However, concepts of personalised medicine seek to approach this ideal, refining information from trials for well characterised subgroups of patients.
Advances in diagnosis, designed to improve the characterisation of patients, create new ways to subdivide datasets and hence reanalyse trials. Meanwhile, bigger, longer, and better designed studies, together with greater interconnectivity between researchers across the globe, create large datasets in which previously impractical subgroup analyses now seem feasible. However, just because we can reanalyse available data, should we? What are the potential pitfalls and problems in subgroup analyses of clinical trials?
In a linked paper (doi:10.1136/bmj.h454), Sussman and colleagues present a reanalysis of a large diabetes trial, the Diabetes Prevention Program, which “could decrease drug overuse, help to prioritize lifestyle programs, and be a model for the secondary analysis of …
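The intuition behind such risk stratified reanalysis can be illustrated with a small simulation. The sketch below is not taken from Sussman and colleagues' analysis; the dataset, variable names, and the assumption of a constant relative risk are hypothetical. It simply shows why absolute benefit can differ markedly across baseline-risk subgroups even when the average (relative) effect of an intervention is uniform.

```python
import numpy as np
import pandas as pd

# Hypothetical illustration only: simulate a trial in which the intervention
# gives a constant relative risk reduction, then examine absolute benefit
# within prespecified baseline-risk quarters.
rng = np.random.default_rng(0)
n = 20_000

baseline_risk = rng.uniform(0.02, 0.40, size=n)    # each patient's untreated risk
treated = rng.integers(0, 2, size=n).astype(bool)   # 1:1 randomisation
relative_risk = 0.70                                 # assumed constant treatment effect
event_prob = np.where(treated, baseline_risk * relative_risk, baseline_risk)
event = rng.random(n) < event_prob

df = pd.DataFrame({
    "treated": treated,
    "event": event,
    "risk_quarter": pd.qcut(
        baseline_risk, 4, labels=["Q1 (lowest)", "Q2", "Q3", "Q4 (highest)"]
    ),
})

# Absolute risk reduction (ARR) and number needed to treat (NNT) per subgroup.
summary = (
    df.groupby(["risk_quarter", "treated"], observed=True)["event"]
      .mean()
      .unstack("treated")
      .rename(columns={False: "risk_control", True: "risk_treated"})
)
summary["ARR"] = summary["risk_control"] - summary["risk_treated"]
summary["NNT"] = 1 / summary["ARR"]
print(summary.round(3))
```

Under these assumptions the highest-risk quarter shows the largest absolute risk reduction and the smallest number needed to treat, which is the intuition behind prioritising treatment, or intensive lifestyle programmes, for such patients.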