Computerised cognitive behaviour therapy (cCBT) as treatment for depression in primary care (REEACT trial): large scale pragmatic randomised controlled trial
BMJ 2015; 351 doi: https://doi.org/10.1136/bmj.h5627 (Published 11 November 2015) Cite this as: BMJ 2015;351:h5627
All rapid responses
We have enjoyed reading the debate surrounding the REEACT trial, especially since it raised some interesting unresolved issues in cCBT research (e.g. the predominance of developer-led trials, and the unclear effectiveness of the popular and free-to-use MoodGYM programme). Of high relevance to this debate, we conducted a meta-analysis of MoodGYM, which has just been published in the Australian and New Zealand Journal of Psychiatry (http://anp.sagepub.com/content/early/2016/07/05/0004867416656258.abstract). It is worth noting that we were not in any way involved in the development of MoodGYM and we do not benefit from its success.
In brief, our main findings were as follows:
Comparisons from 11 studies demonstrated MoodGYM’s effectiveness for depression symptoms at post-intervention, with a small effect size (g = 0.36, 95% confidence interval 0.17 to 0.56; I² = 78%). Removing the lowest quality studies (k = 3) had minimal impact; however, adjusting for publication bias reduced the effect size to a non-significant level (g = 0.17, 95% confidence interval −0.01 to 0.38). Comparisons from six studies demonstrated MoodGYM’s effectiveness for anxiety symptoms at post-intervention, with a medium effect size (g = 0.57, 95% confidence interval 0.20 to 0.94; I² = 85%). Neither the type of setting (clinical vs non-clinical) nor MoodGYM-developer authorship of randomised controlled trials had a meaningful influence on results; however, the results were confounded by the type of control deployed, the level of clinician guidance, the international region of the trial, and adherence to MoodGYM.
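For readers who wish to see how such pooled estimates and heterogeneity statistics are computed, the following is a minimal sketch of a standard DerSimonian-Laird random-effects calculation (pooled Hedges’ g, 95% confidence interval and I²). The study-level effect sizes and standard errors below are invented purely for illustration; this is not the data or code from our meta-analysis.

import numpy as np

# Hypothetical study-level Hedges' g values and standard errors
# (illustrative only; NOT the studies included in the MoodGYM meta-analysis).
g = np.array([0.15, 0.40, 0.55, 0.20, 0.70, 0.30])
se = np.array([0.10, 0.12, 0.15, 0.08, 0.20, 0.11])

w_fixed = 1.0 / se**2                                  # inverse-variance (fixed-effect) weights
g_fixed = np.sum(w_fixed * g) / np.sum(w_fixed)

# Cochran's Q and I^2 quantify between-study heterogeneity
Q = np.sum(w_fixed * (g - g_fixed) ** 2)
df = len(g) - 1
I2 = max(0.0, (Q - df) / Q) * 100

# DerSimonian-Laird estimate of the between-study variance tau^2
C = np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)
tau2 = max(0.0, (Q - df) / C)

# Random-effects pooled estimate with a 95% confidence interval
w_re = 1.0 / (se**2 + tau2)
g_re = np.sum(w_re * g) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))
ci = (g_re - 1.96 * se_re, g_re + 1.96 * se_re)

print(f"Pooled g = {g_re:.2f}, 95% CI {ci[0]:.2f} to {ci[1]:.2f}, I^2 = {I2:.0f}%")

The same machinery underlies the sensitivity analyses reported above: excluding the lowest quality studies, or adjusting for publication bias, simply changes the set of study-level estimates fed into the pooling.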
We hope that our paper is of interest to cCBT research stakeholders and that our findings add value to the debate surrounding the issues raised by the REEACT trial. We would like to highlight that MoodGYM-developer authorship did not have a meaningful impact on results, which shows that assumed ‘conflicts of interest’ are not always indicative of bias.
Many thanks
Conal Twomey & Dr Gary O'Reilly
Competing interests: No competing interests
On December 17 members of the COMPare project wrote a rapid response to the REEACT study report alerting us to the fact that only one of two registered primary outcomes was included in the manuscript and that a secondary outcome was not reported. We agree that we should have identified these discrepancies between the registration and the manuscript during our review process. The BMJ is committed to only publishing the results of trials that are prospectively registered and we ask that authors report the registered outcomes: when there is a discrepancy we ask for an explanation. But, as was evident in this case, we sometimes make mistakes.
After receiving the communication from COMPare, we asked the authors for an explanation. They submitted a series of rapid responses, corresponded with us directly and submitted a correction to the paper explaining the discrepancies and clarifying which outcomes were pre-specified. We are satisfied with their response and are convinced about the integrity of the research.
Competing interests: No competing interests
We thank Mr Drysdale and colleagues for their response.
At the risk of an extended correspondence on these issues which may imply we are questioning the value of their work, we would make the following final points.
We accept that our reporting of the change in the primary depression outcome in the BMJ paper could have been better. Whether our error is fairly labelled with the stark term ‘outcome switching’ is open to debate, but we accept that, by their rules, we failed to be entirely transparent and that we meet their criteria for such a rating. The BMJ has added additional information for readers (http://www.bmj.com/content/352/bmj.i195) and we have learned a valuable lesson for the future – which we agree is a positive outcome of their project. There was complete transparency in relation to protocol changes in our full report (freely available in the public domain), where we devoted a whole chapter to the subject.
We do not accept their response in terms of the EQ5D.
They state ‘publishing quality of life data in a separate academic paper is in no sense a recognised universal convention’.
We did not suggest it was a universal convention – just that it was ‘not unusual’.
They state that ‘we therefore think that readers should have been told that an extra three secondary outcomes were pre-specified and that there was a plan to report these elsewhere’.
The BMJ report does clearly state that ‘we present the results of a full economic evaluation elsewhere’ (reference 32). That could have been more specific (again, a useful learning point), but it is explicit that the BMJ report is not a complete report of outcomes. A casual reader might be forgiven for not wanting to follow that up in the interests of time. To ignore that clear reference to other data when judging reporting of outcomes seems odd.
They state that ‘Expecting all readers to know that they should also be reviewing an additional set of documents - which may or may not exist, for a subset of trials depending on funder, with an unspecified time delay - represents an even more impractical method to ensure transparent outcome reporting’
Mr Drysdale and colleagues paint a compelling picture here. However, that picture is not very convincing in the case of REEACT. The existence of the economic outcomes in another document was clearly communicated (a journal title and a designation of ‘in press’ were the maximum reference detail that we were able to provide at that time). It in no way suggests the kind of complex detective work they imply. A simple email to the corresponding author could have cleared that up very quickly – readers will note that our team responded promptly to their initial letter.
We note that the COMPare website refers to outcomes reported in a 'final paper', while their BMJ response refers to a 'primary academic publication'. Does the BMJ paper on clinical outcomes meet the criteria for either? Both terms are ambiguous. If they had asked, we would have recommended that they treat the final HTA report (published a matter of weeks after the BMJ paper) as the definitive source, as it is unrestricted in length and includes the maximum level of detail. Again, an email would have been sufficient to clear this up – hardly ‘impractical’, and a reasonable expectation given the serious concerns they are raising.
Again we note that the HTA report contains a whole chapter detailing the inevitable protocol changes that were made in the course of a five-year project working under the NHS research governance framework. Maximum transparency; public domain; open access; no switching of outcomes. We also provided everything that was asked of us by the editors of the BMJ in the pre-publication process. Thus our disquiet with their ‘rush to judgement’ on the basis of the BMJ paper alone remains.
We have great sympathy with the fact that the COMPare process may be onerous. As researchers, we are very aware of the pressures of workload and the need for pragmatic decisions to make projects feasible.
However, REEACT is still assessed as ‘60% of pre-specified outcomes reported’ (as of 14/1/2016) on a public website ‘tracking switched outcomes in clinical trials’. We think that is a serious charge which is neither entirely fair nor accurate given the information in the BMJ paper and the clarifications about the EQ5D which we have provided. Having endured the more onerous task of delivering the REEACT trial (we calculated over 100,000 person-hours), they can perhaps understand that our sympathy with the time requirements of their system is more limited.
We wish the team luck with the remainder of their project.
Competing interests: No competing interests
We thank the authors of the REEACT trial for their response to our letter.
The protocol they cite as “pre-specified” [1] is dated 10 December 2012; however, data collection for the trial started in December 2009, three years earlier [2]. We have therefore used the ISRCTN registry entry that predates the start of the trial to establish the pre-specified outcomes.
The authors state that in their ISRCTN registry entry the primary outcome is the PHQ-9 [3]. However, in the ISRCTN registry entry [4], the primary outcome is in fact: “depression severity and symptomatology as measured by a validated self-report measure (the Patient Health Questionnaire [PHQ-9]) and the International Classification of Diseases (ICD-10) depression score at four months”. The only reference on Google to an “International Classification of Diseases (ICD-10) depression score” is the REEACT trial’s ISRCTN registry entry. We suspect the phrasing may refer to the WHO’s Major Depression Inventory, but believe this constitutes an imprecisely specified outcome. In any case, it is clear from the registry entry that there is an additional outcome beyond the PHQ-9 that has not been reported or mentioned in the BMJ trial publication [2].
Item 6a of the CONSORT statement states that “all outcome measures, whether primary or secondary, should be identified and completely defined” and that details of how and when they were assessed should be reported in the trial publication [5]. Item 6b further states that “any changes to trial outcomes after the trial commenced, with reasons” should also be reported. Where pre-specified outcomes will be reported elsewhere, or non-prespecified outcomes have been added to the published report, this should be declared. Furthermore any changes to the pre-specified primary outcome, even if considered trivial, should be set out in the paper.
The authors go on to argue that the pre-specified secondary outcomes, “Health state utility (EuroQol [EQ5D]) at 4, 12 and 24 months” are not required to be reported as they represent economic as opposed to clinical results. We disagree: publishing quality of life data in a separate academic paper is in no sense a recognised universal convention, and we therefore think that readers should have been told that an extra three secondary outcomes were pre-specified and that there was a plan to report these elsewhere.
The authors lastly suggest that, where studies are funded by NIHR, readers should already know that further information on changes to the pre-specified outcomes could be gleaned by reading various other online NIHR documents associated with the trial. We view that suggestion as problematic. By the authors’ own tacit admission these documents were not available when their BMJ paper was published, and when our assessment was conducted. Furthermore, NIHR-funded trials represent only a tiny fraction of medical research, and so very few readers will know that additional documents may be available, where outcome switching may be declared or discussed, for trials with this specific funding source. CONSORT guidance states clearly that the information needed to detect such changes should be presented clearly in the trial report. From our experience it can take up to 5 hours per trial to check for outcome switching, even using only the protocol, registry entry, and journal publication. Relying on readers to check, even in these documents alone, is therefore impractically onerous. Expecting all readers to know that they should also be reviewing an additional set of documents - which may or may not exist, for a subset of trials depending on funder, with an unspecified time delay - represents an even more impractical method to ensure transparent outcome reporting, especially when the fact that these documents may contain additional information on outcome switching is not even mentioned in the primary academic publication of the trial’s results.
The deviation in sample size mentioned is beyond the aims and protocol of COMPare, and is not mentioned in our assessment or letter. The authors mention that we do not praise other aspects of their trial: again, these are outside the remit of COMPare, which is solely to identify and then flag incomplete outcome reporting to journal editors and the readers of trial reports, in order to drive improvement on this prevalent problem. It may be useful to note that this specific trial scores very much more positively for outcome reporting than most other trials we have reviewed: however this reflects the chaotic state of information management in the medical literature more broadly.
We thank the authors of REEACT for their open discussion of our analysis.
Yours faithfully,
Henry Drysdale, Ben Goldacre, and Carl Heneghan on behalf of the COMPare team.
References:
[1] REEACT Trial Protocol version 12, 10/12/2012: http://www.nets.nihr.ac.uk/__data/assets/pdf_file/0003/51438/PRO-06-43-0...
[2] Gilbody S et al, Computerised cognitive behaviour therapy (cCBT) as treatment for depression in primary care (REEACT trial), BMJ 2015;351:h5627.
[3] Gilbody S et al, REEACT is entirely consistent with CONSORT and the authors exceeded reporting requirements of CONSORT and COMPare, BMJ Rapid Response 21/12/2015.
[4] REEACT trial registry entry: http://www.controlled-trials.com/ISRCTN91947481/91947481?link_type=ISRCT...
[5] Moher D et al, CONSORT 2010 Explanation and Elaboration: updated guidelines for reporting parallel group randomised trials, BMJ 2010; 340:c869.
Competing interests: No competing interests
We thank Henry Drysdale and colleagues for their interest in our trial. They correspond on behalf of the COMPare project team who are ‘analysing each trial for outcome switching, by comparing the clinical trials registry (or ideally a trial protocol)’ [1].
There is no inconsistency between our trial report and our pre-specified trial protocol, which has been in the public domain for several years [2]. The transparency of our reporting exceeds all CONSORT requirements. We recorded depression severity using the PHQ9, which we analysed both as a dichotomous outcome (as validated against ICD criteria) and as a continuous outcome. The primary trial endpoint was a dichotomous outcome derived from the PHQ9 at four months, and we also reported this outcome at 12 and 24 months. We specify our primary outcome as the PHQ9 on the trial registry, where there is also a link to our full trial protocol (exceeding the level of reporting required by the ISRCTN trial registry, and representing the ‘ideal’ of the COMPare project). For the avoidance of doubt, we have reproduced the relevant excerpt from the trial protocol, which is presented within the ISRCTN registry [2] and included in appendix 2 of the BMJ report [3]:

‘Primary outcome measure: our primary outcome will be depression severity and symptomatology as measured by a validated self-report measure (the Patient Health Questionnaire-9) at four months. The PHQ9 is a nine-item questionnaire, which records the core symptoms of depression. There are extensive US and non-US validation and sensitivity to change data. It has most recently been validated in a UK primary care population… The primary outcome, depressed/not depressed (at four months) will be used in a logistic regression model to compare each of the computerised packages with usual care alone.’

In the trial registry and trial protocol we also specified the PHQ9 at 12 and 24 months as secondary outcomes.
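To illustrate the analysis described above (a dichotomous depressed/not depressed outcome derived from the PHQ9 at four months, compared across arms by logistic regression), the minimal sketch below uses simulated data. The variable names, the cut-off of 10 or more, and the unadjusted model are assumptions made purely for illustration; this is not the REEACT analysis code, and readers should rely on the trial report and protocol for the actual model specification.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 600  # simulated participants across three arms (illustrative only)

# Simulated data: arm allocation and a four-month PHQ-9 score (0-27)
df = pd.DataFrame({
    "arm": rng.choice(["usual_care", "beating_the_blues", "moodgym"], size=n),
    "phq9_4m": rng.integers(0, 28, size=n),
})

# Dichotomise into depressed / not depressed at an assumed cut-off of 10 or more
df["depressed"] = (df["phq9_4m"] >= 10).astype(int)

# Logistic regression comparing each computerised package with usual care (reference arm)
model = smf.logit(
    "depressed ~ C(arm, Treatment(reference='usual_care'))", data=df
).fit(disp=False)

# Odds ratios and 95% confidence intervals for each package versus usual care
print(np.exp(model.params))
print(np.exp(model.conf_int()))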
Henry Drysdale and colleagues highlight that we omitted to present one of our secondary outcomes (the EQ5D, or ‘EuroQoL’; our chosen measure of quality adjusted life years or ‘QALYs’). The BMJ report presented the full clinical outcomes of the REEACT trial [3], and we were not asked by the editors or peer reviewers to present the results of the full economic evaluation (where QALYs and incremental cost effectiveness ratios are the metrics of interest). This is not an omission on our part, but reflects a separation of clinical and economic results which sometimes happens (even in the BMJ). Far from seeking to omit this secondary outcome, we currently have a paper in preparation to report the economic dimension of cCBT. Should the BMJ wish to publish this, we will be happy to oblige.
The only deviation we can see from the ISRCTN entry is the fact that we exceeded our initial trial sample size (691 in the published report versus 600 in the trial registry). We don’t think this is a hanging offence, and we did this to ensure we maintained our pre-specified level of statistical power when follow-up was a little lower than we anticipated (such things do happen). We note that trials commonly fail to achieve their pre-specified sample size, and Mr Henry Drysdale and colleagues [writing as members of the Centre for Evidence Based Medicine] failed to find anything positive to say about our trial in this [or any other] respect.
We do not know whether the correspondents have extensive experience of HTA funding and publication policies, but all trials funded through this route require a full NIHR report in which all outcomes are presented in their entirety, after publication of other papers. Splitting clinical and economic outcomes is also not unusual. Although we applaud their sterling work, they might wish to consider that trial publications should really be considered in their entirety in the COMPare process. The vagaries of publication schedules mean that they may need to give authors a little more time before potentially rushing to judgement about 'missing outcomes' that are 'in press' in the publication machine. A full report commissioned by the NIHR Health Technology Assessment programme is now published in the NIHR Library and we commend this report to interested readers. It includes the results of the clinical, economic and qualitative evaluations of cCBT derived from the REEACT trial [4]. We wish Mr Drysdale and colleagues well with the COMPare project and thank them for the opportunity to respond to their query.
[1] COMPare project website www.COMPare-Trials.org
[2] Full REEACT trial protocol at http://www.nets.nihr.ac.uk/__data/assets/pdf_file/0003/51438/PRO-06-43-0...
[3] Gilbody S et al, Computerised cognitive behaviour therapy (cCBT) as treatment for depression in primary care (REEACT trial), BMJ 2015;351:h5627.
[4] Littlewood E, Duarte A, Hewitt C, Knowles S, Palmer S, Walker S, et al. A randomised controlled trial of computerised cognitive behaviour therapy for the treatment of depression in primary care: The Randomised Evaluation of the Effectiveness and Acceptability of Computerised Therapy (REEACT) trial. Health Technology Assessment 2015; Volume 19 (Number 101).
Competing interests: No competing interests
Dear Editor,
Your recent publication by Gilbody et al [1] reports outcomes that are different from those initially registered [2].
There was one pre-specified primary outcome, which was incompletely reported in the paper. The PHQ-9 score was reported as a dichotomous outcome that was not pre-specified, and the ICD-10 depression score was not reported in the paper. There were four pre-specified secondary outcomes, of which three were reported; one (EQ-5D) is not reported anywhere in the publication. The paper also reports three new secondary outcomes, PHQ-9 > 10 at three separate timepoints. These were not pre-specified, and were not declared as such.
The BMJ has endorsed the CONSORT guidelines on best practice in trial reporting [3]. In order to reduce the risk of selective outcome reporting, CONSORT requires that all pre-specified primary and secondary outcomes be reported and that, where new outcomes are reported, it is made clear that these were added at a later date, with an explanation of when and for what reason.
This letter has been sent as part of the COMPare project [4]. We aim to review all trials published from now on in a sample of top journals, including the BMJ. Where outcomes have been incorrectly reported, we are writing letters to correct the record and to audit the extent of this problem, in the hope that this will reduce its prevalence. We are maintaining a website at COMPare-Trials.org where we post a summary of the results for each trial that we analyse. We hope that the BMJ will publish this response, to ensure that those using the results of this trial to inform clinical decision-making are aware of the discrepancies.
Yours faithfully,
Henry Drysdale, Eirion Slade and Kamal Mahtani on behalf of the COMPare project team.
[1] Gilbody S et al, Computerised cognitive behaviour therapy (cCBT) as treatment for depression in primary care (REEACT trial), BMJ 2015;351:h5627.
[2] Trial registry entry: http://www.isrctn.com/ISRCTN91947481?q=91947481&filters=&sort=&offset=1&...
[3] Moher D et al, CONSORT 2010 Explanation and Elaboration: updated guidelines for reporting parallel group randomised trials, BMJ 2010; 340:c869.
[4] COMPare project website www.COMPare-Trials.org
Competing interests: No competing interests
Response by Gilbody and colleagues on behalf of the REEACT collaborative
We thank all the correspondents who took the time to read and comment on the REEACT trial.1 Our trial has provoked important discussions around the place of computerised therapy in modern care systems and the challenges of researching such interventions. Our trial is not the last word on cCBT but does illustrate that the value of computer delivered psychological therapy cannot be assumed and that evidence of efficacy does not necessarily translate into real world effectiveness. The REEACT trial demonstrated that two computer programmes that are still routinely offered in UK NHS primary care mental health services and which are recommended by NICE did not, on average, show any benefit for people with depression when added to routine NHS primary care.
Several correspondents recognised the value of the REEACT trial but some important issues were raised. Space prevents us from responding to each, but there were a number of points which deserve some comment and response.
Efficacy versus effectiveness
Several correspondents would have preferred us to conduct another efficacy study, rather than a test of real world effectiveness.2 There is ample randomised evidence to show that cCBT can be efficacious when delivered under ideal conditions: when compared with nothing at all, or when delivered with very high levels of support (often guided by experienced psychological therapists).3 Our trial adds little to this body of literature, but there remain important uncertainties relating to the real world effectiveness of cCBT, to which REEACT speaks. The majority of patients with depression are treated entirely in primary care, and the REEACT trial is one of the first (and certainly one of the largest) trials to test the effectiveness (rather than efficacy) of this technology in a primary care setting.
For GPs, patients and commissioners of services, a very important question is ‘what is the benefit I can expect to see if a person with depression is offered cCBT in addition to the usual care which they already receive?’ This is a question of effectiveness rather than efficacy, and it was the question posed to the UK research community by the body which commissioned the REEACT trial – the National Institute for Health Research (NIHR) Health Technology Assessment Programme. We think our results are informative, and a number of criticisms relate to the choice of a pragmatic rather than explanatory design and to the primary care setting of our trial. We defend our research in the face of this criticism.
Lack of engagement with cCBT
Several correspondents have sought to explain away the negative results of the REEACT trial with reference to a number of factors which were inherent in our pragmatic design. People consented to take part in a trial of cCBT, and we can therefore assume that they were interested in cCBT as a therapeutic option. We sought to engage people with weekly phone contact and reminders to use the programmes. The vast majority logged on to the systems to look at the programmes; we know this because we were able to check computer records. However, very few people returned to complete a second session, and the proportion of people who completed all sessions was very small indeed. We maintain this is a very important finding and the most significant contribution of the REEACT trial. We note that none of the correspondents suggested that the level of support offered in the REEACT trial was atypical of that offered in routine NHS practice, and to that end we succeeded in delivering a fair test of real world effectiveness. Our research suggests that a higher level of support than was delivered in REEACT is needed to enhance uptake. This would require a redesign of current NHS services and a level of support greater than that recommended for the two programmes that were trialled. The effectiveness of enhanced levels of support needs to be tested in large scale effectiveness trials, and we have our own programme of ongoing research which speaks to this issue. In the meantime, we can be confident that cCBT as delivered in UK primary care did not demonstrate uptake or effectiveness. This remains an important finding.
Cross over and dilution of effect
One explanation for the negative result offered by some correspondents was the reported use of computer resources by a small proportion of participants in the usual care arm, which might have diluted the effect of cCBT. We think this is a red herring and a diversion from the finding of lack of engagement. There would need to have been very substantial crossover, with high levels of engagement in the usual care arm, and a very powerful effect from only a very small number of sessions, for cCBT to have been effective but for this effect to have been diluted by crossover. We do not think this explanation stands up to scrutiny, and crossover effects are to be expected in pragmatic trials. The effects of cCBT were likely to be minimal in both the intervention and usual care arms on the basis of the lack of engagement alone; invoking Occam’s razor, this is the more parsimonious explanation.
Compared to what?
REEACT was an effectiveness trial and is one of the first trials to evaluate cCBT in terms of the actual value that can be expected under real world conditions and when compared with usual GP care. Several correspondents commented that the outcomes under usual care make it difficult to demonstrate the benefits of cCBT, and would have preferred a ‘do nothing’ comparator arm. Such studies have already been undertaken and we have noted their results, but they do not provide a sufficient level of evidence for establishing the value of a new technology. Demonstrating effectiveness is always more difficult than demonstrating efficacy, but this is the level of evidence that is demanded in health technology assessment. Judging the value of treatments in the face of discordant evidence between efficacy and effectiveness studies is an unenviable task for bodies such as NICE.
Opening up the black box to understand how people engage with cCBT
The REEACT trial benefited from a concurrent qualitative evaluation, which has been published in BMJ Open by Knowles and colleagues.4 So whilst many correspondents speculated as to why cCBT worked for some but not for others, we have the benefit of real data to illuminate this question. The main theme that emerged was that people with depression are often demotivated by their condition and found it difficult to maintain their enthusiasm for a computer-based treatment modality. Participant experience was on a continuum, with some patients unable or unwilling to accept psychological therapy without interpersonal contact while others appreciated the enhanced anonymity and flexibility of cCBT. The majority of patients were ambivalent, recognising the potential benefits offered by cCBT but struggling with the challenges posed by the severity of their illness, limited support and lack of personalisation of programme content. Both positive and ambivalent patients perceived a need for a greater level of monitoring or follow-up to support completion, while negative patients reported deliberate non-adherence due to dissatisfaction with the programme. We also learned that many people valued the regular contact provided via telephone support, and we noted that several participants continued to accept support phone calls despite no longer using the computer programmes. This raises the question of what level of support is needed, over and above that currently offered in the NHS, for cCBT to work. This is an interesting question, but it is a different question from that which was asked in the REEACT trial.
(B)leading edge technology?
One correspondent pointed out that Beating the Blues and MoodGYM are no longer considered to be leading edge computer technologies. We have sympathy with this position, since inherent in computer technology is the need for continual innovation. However, we counter this argument with the fact that there was existing trial-based evidence that these technologies could work,5 and they were NICE-recommended treatments at the time the REEACT trial was designed.
The lack of engagement and the absence of a clinically important effect were so clear that the REEACT trial raises fundamental questions about the way in which people with depression interact with computers, or indeed with other self-help technologies. This goes beyond the choice of computer programme. We would also assert that it remains important that all new technologies are trialled, since effectiveness can never be assumed. Such technologies are never resource free, and there is an opportunity cost when healthcare systems invest in unevaluated technologies which later turn out to be ineffective.
What is to be done?
We remain optimistic that cCBT does have a role in the management of depression, but the REEACT trial shows that the expectations of cCBT do not readily translate into tangible patient benefits under conditions of routine care and delivery. Our main conclusion is that the level of support that needs to be offered in the context of routine care is much higher than is currently the case in publicly funded healthcare systems such as the NHS. Our qualitative research4 and systematic review evidence3 support this assertion and suggest that greater levels of guidance and support might result in patient benefit. This is an empirical question which needs to be answered in a well-designed pragmatic effectiveness study with a model of care which exceeds usual levels of NHS support. Fortunately, we have undertaken such a study (REEACT2, ISRCTN55310481) and the results of this research will be available in 2016.
Finally, we thank the correspondents for an interesting debate; the interest in our trial reflects the importance of this topic and the need for effective low intensity interventions for depression that can be delivered at scale in the NHS. We also note that our trial has generated a debate about the value of computer-mediated psychological therapy which has not hitherto taken place.
1. Gilbody S, Littlewood E, Hewitt C, Brierley G, Tharmanathan P, Araya R, et al. Computerised cognitive behaviour therapy (cCBT) as treatment for depression in primary care (REEACT trial): large scale pragmatic randomised controlled trial. BMJ 2015;351:h5627.
2. Schwartz D, Lellouch J. Explanatory and pragmatic attitudes in therapeutic trials. Journal of Chronic Diseases 1967;20:637-48.
3. Andersson G, Cuijpers P. Internet-based and other computerized psychological treatments for adult depression: A meta-analysis. Cog Behav Ther 2009;38(4):196-205.
4. Knowles SE, Lovell K, Bower P, Gilbody S, Littlewood E, Lester H. Patient experience of computerised therapy for depression in primary care. BMJ open 2015;5(11):e008581.
5. Proudfoot J, Goldberg DP, Mann A, Everitt B, Marks IM, Gray JA. Computerised, interactive, multimedia cognitive behavioural therapy for anxiety and depression in general practice. Psychol Med 2003;33:217-27.
Competing interests: No competing interests
I note with interest the findings of the REEACT trial (1), which provides a quantitative evaluation of the effectiveness of both MoodGYM (2) and Beating the Blues (3) for depression in a primary care context. This work has taken place as part of a larger research community effort to characterise the effectiveness of a variety of Computerised CBT (CCBT) offerings, in a broad range of settings. As noted in the REEACT paper, one intriguing characteristic of MoodGYM is that access is provided free and uncontrolled, through a standard web-browser. This contrasts with Beating the Blues, where access requires payment of a subscription fee, and whose deployment therefore entails a direct cost to any healthcare providers who choose to engage with it (4).
As an expert in Human-Computer Interaction, I have been studying the interface presented by MoodGYM in order to understand the experience of engagement on the part of a user experiencing depression. In conducting this work, I have identified a set of concerns with features of MoodGYM’s interaction framework of which primary care practitioners might usefully be aware. REEACT suggests that MoodGYM is no more effective than standard GP care. I would suggest that some of the following observations could be considered contributory causes, subject to further research to gather evidence for this hypothesis.
CBT itself is a mature approach to the treatment of mental illness, which is grounded in a detailed understanding of the subtle interactions between cognition and behaviour, and which also considers their impact on emotion. MoodGYM provides an online training programme presenting structured exercises to try to teach concepts from CBT; users are told that they will “Learn cognitive behaviour therapy skills for preventing and coping with depression”. A substantial concern with MoodGYM is that complex concepts drawn from CBT practice are presented in a manner which is too superficial and not in keeping with this practice. This is most apparent in the foundational concept of “What You Think Is What You Feel” (given the acronym WYTIWYF), which is introduced early in the interface and regularly reinforced throughout the trajectory of engagement. An inherent problem with WYTIWYF is that it excludes behavioural elements. This feels like a major omission, given CBT’s emphasis on directly addressing damaging behaviours as an important cause of discomfort (5).
WYTIWYF also implies a very direct causal link between cognition and emotion, and this leads to some specific interactions which might be problematic in some circumstances. At one point in the training programme, MoodGYM asks the user to provide a textual description of an emotional reaction that they have experienced in the last few weeks, and then later asks them to identify the cognitive distortion that caused it (after working through a teaching unit which considers a variety of common cognitive distortions previously identified by CBT practitioners). Above and beyond the difficulty of expressing an emotional reaction in plain text, which may simply be too challenging for some users, this interaction might be unsuitable for a user experiencing depression linked to profound grief caused by the loss of a loved one. In this context, it feels inappropriate, and potentially offensive, to suggest that this reaction might be caused by a cognitive distortion that can be fixed, rather than by a natural and ongoing reaction to an important emotional event.
The primary delivery mechanism for teaching in MoodGYM is the use of cartoon-like fictional characters to humanise abstract concepts developed by the community of CBT practitioners. This is somewhat in keeping with the approach taken by Beating the Blues, though the latter uses recorded video of experiences of mental illness to help users understand the meaning of specific conceptualisations. Though characterisation is an interesting pedagogical approach in and of itself, a specific choice within MoodGYM to name one character Mr. Creepy Angry seems highly contentious in the context of usage by individuals experiencing fragile mental health. When first presented to the user, Mr. Creepy Angry is accompanied by a by-line saying “This is Mr. Creepy Angry. He really has problems”. This seems incredibly inappropriate in the context of a broader psychotherapeutic community which has long understood the value of avoiding negative judgements on individuals, and of promoting self-esteem and self-efficacy (6). A second character, Noproblemos, is set up as an ideal to work towards; here, the idea that there is an ideal mental state seems excessively judgemental and limiting in a therapeutic context.
MoodGYM also has a variety of other features which seem inappropriate in the case of usage by individuals who are experiencing mental discomfort and, potentially, profound despair. Selected examples include a choice to structure one teaching unit as a humorous and facile television quiz featuring the various MoodGYM characters, and an overly legalistic introductory disclaimer which is more than a thousand words long, in small type. The latter might be seen as problematic in the context of depression as an illness which can be accompanied by substantial difficulties with memory and cognition (7).
Collectively, the observations presented in this critique, combined with a clear imperative to treat those suffering from depression in a sensitive and appropriate manner, might raise doubts as to the appropriateness of recommending MoodGYM in a primary care context. At present, those interested in engaging with CCBT as a treatment approach might well be better directed towards Beating the Blues, which has been the subject of a NICE clinical recommendation (4), implying a rigorous examination by healthcare experts.
An important issue to note, however, is that the availability of free access to MoodGYM means some individuals may choose to engage with it before seeking treatment from healthcare professionals. These individuals might then acquire a set of misunderstandings about the nature of CBT, which might then hinder the process of their treatment. How best to support practitioners in responding to this situation is an open question, especially in the context of an ever-increasing range of technologies that claim to offer benefits to health, but which carry the potential to be damaging or misleading if poorly designed.
References
1 Gilbody S, Littlewood E, Hewitt C, Brierley G, Tharmanathan P, Araya R et al. Computerised cognitive behaviour therapy (cCBT) as treatment for depression in primary care (REEACT trial): large scale pragmatic randomised controlled trial. BMJ 2015;351:h5627.
2 moodgym.anu.edu.au [Internet]. Canberra, Australia: Australian National University; [cited 2015 Dec 8]. Available from: https://moodgym.anu.edu.au/.
3 beatingtheblues.co.uk [Internet]. Liverpool, United Kingdom: Health and Wellbeing Ltd.; [cited 2015 Dec 8]. Available from: http://www.beatingtheblues.co.uk/.
4 National Institute for Clinical Excellence. Depression in adults (update). NICE Clinical Guideline 90. Available from: http://publications.nice.org.uk/depression-cg90/person-centred-care.
5 Beck JS. Cognitive behavior therapy: basics and beyond. 2nd ed. New York: The Guilford Press; 2011.
6 Rogers C. On becoming a person. New ed. Robinson; 2004.
7 World Health Organisation. The ICD-10 Classification of Mental and Behavioural Disorders. Available from: www.who.int/classifications/icd/en/bluebook.pdf
Competing interests: No competing interests
We commend the authors of the REEACT trial for conducting a large-scale pragmatic randomised controlled evaluation of the effectiveness of offering either computerised cognitive behaviour therapy (cCBT) (Beating the Blues or MoodGYM) in addition to usual care for depression in UK primary care settings.
Computerised CBT packages available on the internet, especially ones that are free to the consumer and the NHS, widen access to treatment in a variety of important ways, such as choice, immediate access, contextualisation to personal need, not having to ask other people for help, and the lack of stigma associated with having a consultation for depression recorded in case notes. These considerations regarding scale of delivery, efficiency and accessibility are particularly important for people with depression, of whom it is estimated that only 25 per cent receive any health service treatment at all [1].
The authors correctly state that, as a pragmatic trial, they have not set out to test the efficacy of cCBT for depression already established in meta-analyses [2]. However, in our view, their headline conclusion that ‘supported (our italics) computerised cognitive behaviour therapy confers modest or no benefit over usual GP care’ overstates what the REEACT study can tell us about the effectiveness of cCBT more generally, as the cCBT interventions in this trial were unguided and unsupported by clinicians. Research has consistently shown that unguided cCBT delivered without clinician support has smaller effects and poorer adherence than clinician-guided cCBT [3, 4]. Hence, the results of the REEACT trial are hardly surprising given what we already know. However, our main concern is that the authors’ conclusions regarding the lack of effectiveness of ‘supported’ cCBT for depression in primary care could easily be misinterpreted by policy makers and commissioners of services as undermining the benefits of cCBT and digital interventions more generally, in particular where clinician support is provided, effectiveness has been clearly demonstrated, and equivalence to face-to-face CBT has been shown [5].
In our view, the most important result of this trial, and the key reason for the lack of effectiveness of the interventions, is the very low acceptability and uptake of cCBT when delivered without clinical or therapist support for patients with depression in primary care. The self-guided, stand-alone computerised interventions, delivered without clinician support or guidance, were unpopular and infrequently used by patients. Fewer than 20% of patients allocated to either Beating the Blues or MoodGYM completed the full package, and the median number of sessions used was just one in both groups. It is well known that clinician-supported cCBT produces larger effects than unsupported self-guided cCBT, and the authors acknowledged that the trial was probably underpowered to detect the small effect sizes (d = 0.2 to 0.25) reported in trials of unsupported cCBT [6]. The larger effect sizes for ‘supported’ cCBT interventions all involve some level of remote therapist feedback, motivational support and interaction. In the REEACT trial, only ‘technical’ support was offered, which amounted to an average of only six minutes in both intervention arms. We therefore think it is potentially misleading to characterise the REEACT trial as demonstrating the effectiveness of ‘supported’ cCBT. The risk of labelling these interventions as ‘supported’ cCBT is that clinician-supported cCBT may be viewed by policy makers and commissioners as equally ineffective. In addition, given the poor adherence to cCBT in this trial, it would have been helpful if the authors had presented a ‘dose-response’ analysis to test whether adherence to the cCBT intervention predicted a better clinical response.
As well as the likelihood that the REEACT trial was underpowered to detect the smaller effect size expected with self-guided cCBT delivered without clinician support, other design factors such as cCBT contamination effects between the usual care and treatment arms, high levels of antidepressant use, and the choice of a categorical recovery threshold may have masked or minimised a genuine, albeit small, intervention effect. Given that the PHQ-9 score at baseline was in the moderate to severe range, it is possible that self-guided cCBT could lead to a small but worthwhile improvement in depressive symptoms even though patients may remain above the PHQ-9 threshold of 10 for ‘caseness’.
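To give a rough sense of the scale of this power problem, the sketch below uses a generic two-arm power calculation for standardised effect sizes of d = 0.2 and d = 0.25 at 80% power and a two-sided alpha of 0.05. It is an illustration under those generic assumptions only, not the REEACT trial’s own power calculation.

from statsmodels.stats.power import TTestIndPower

power_calc = TTestIndPower()
for d in (0.20, 0.25):
    # Participants needed per arm for a two-sided test, alpha = 0.05, 80% power
    n_per_arm = power_calc.solve_power(effect_size=d, alpha=0.05, power=0.80)
    print(f"d = {d:.2f}: roughly {n_per_arm:.0f} participants per arm")

On these generic assumptions, detecting d = 0.2 requires roughly 400 participants per arm, which is more than a three-arm trial of 691 participants can supply; this is consistent with the concern that a genuine small effect could have been missed.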
In hindsight, an internal pilot with the first 20 to 30 participants might have been carried out to establish the minimum level of therapeutic support people would be willing to receive, before embarking on an RCT testing interventions that proved unacceptable to, and unused by, 691 people. The authors claim that they offered more support than is provided for cCBT in IAPT, but do not provide any evidence to back up this statement. Self-guided treatments for depression without clinician support are likely to appeal only to a minority who are highly motivated for this kind of treatment and are capable of finishing the intervention without stimulation from an external therapist or coach.
Finally, for policy makers and commissioners this study offers a useful lesson concerning what we should not do when implementing this type of intervention. It also illustrates the importance of considering the type of human support available and the setting in which these interventions are delivered. For patients with depression in primary care, cCBT delivered as part of stepped care is most likely to be accepted and taken up when there is at least a minimal level of clinical guidance and support. Accurately specifying this level of support is particularly important when comparing the effectiveness of different interventions. Increasingly, cCBT and other digital interventions are being offered in IAPT and other mental healthcare settings as part of ‘blended’ care delivery which integrates clinician contact and guidance with automated computerised self-help. Because self-guided (unsupported) cCBT can be delivered at near-zero marginal cost, despite small effect sizes and low adherence it may still be an effective public health intervention when delivered at scale via sites such as NHS Choices ‘Moodzone’ [7] to those who do not access traditional health services such as primary care. Currently, research supported by the NIHR is underway to guide commissioning decisions regarding the clinical and cost-effectiveness of offering internet-based peer support and self-guided (unsupported) cCBT at a population level outside primary care [8].
References:
[1] Davies SC. Annual Report of the Chief Medical Officer 2013, Public Mental Health Priorities: Investing in the Evidence. London: Department of Health; 2014.
[2] Richards D, Richardson T. Computer-based psychological treatments for depression: a systematic review and meta-analysis. Clin Psychol Rev. 2012;32:329-42.
[3] Christensen H, Griffiths K, Groves C, Korten A. Free range users and one hit wonders: community users of an Internet-based cognitive behaviour therapy program. Aust N Zealand J Psychiatry. 2006;40:59-62.
[4] Johansson R, Andersson G. Internet-based psychological treatments for depression. Expert Rev Neurother. 2012;12:861-70.
[5] Andersson G, Topooco N, Havik OE, Nordgreen T. Internet-supported versus face-to-face cognitive behavior therapy for depression. Expert Review of Neurotherapeutics. In press.
[6] Cuijpers P, Donker T, Johansson R, Mohr DC, van Straten A, et al. Self-guided psychological treatment for depressive symptoms: a meta-analysis. PLoS ONE 2011;6(6):e21274. doi:10.1371/journal.pone.0021274
[7] http://www.nhs.uk/conditions/stress-anxiety-depression/pages/low-mood-st...
[8] Morriss R, Hollis C, Coulson N, Noran P, Avery A, Tata L, et al. Randomised controlled trial of an established direct to public peer support and e-therapy programme (Big White Wall) versus information to aid self-management of depression and anxiety. Protocol. National Institute for Health Research Collaboration for Leadership in Applied Health Research and Care East Midlands. 29 September 2015.
Competing interests: No competing interests
Re: Computerised cognitive behaviour therapy (cCBT) as treatment for depression in primary care (REEACT trial): large scale pragmatic randomised controlled trial
In this article the authors have attempted to assess the effectiveness of computerised interventions in depression. They concluded that the intervention was not significantly superior on either the primary or the secondary outcomes. One way of looking at these results is to conclude that computerised CBT techniques are ineffective in the management of depression. On the other hand, this article helps us to think about, and improve, the process of selecting candidates for self-help interventions. One reason for the high attrition in the REEACT study could be that lack of motivation is a core feature of depressive disorders.
Computerised interventions are important self-help therapies which are available worldwide. They are used in a wide range of conditions, from anxiety to depression, and are also used in people with behavioural problems and eating disorders. Research on the effectiveness of self-help interventions in behavioural and eating disorders shows positive results. An important factor emerging from these findings is that ‘motivation and readiness to change’ determine an individual’s engagement and the likelihood of a positive outcome.
NICE guidance recommends the use of self-help interventions for common mental health problems. If we look at the assessment process and selection criteria, patients with common mental health problems are assessed using standard scales for particular disorders, for example the PHQ-9 for depression and the GAD-7 for anxiety. If individuals score above a cut-off and the suicide or self-harm risk is low, they are offered self-help interventions. Individuals’ readiness for change and motivation levels are not taken into consideration. One assumption is that including a scale to assess motivation level or readiness to change could improve adherence as well as outcomes; future research could include these components, as sketched below. This would also pave the way for improving IAPT services.
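As a concrete illustration of this suggestion, the sketch below encodes a simple triage rule: offer self-help only when the symptom score is above the cut-off, risk is low, and a hypothetical readiness-to-change score is above a minimum threshold. Apart from the widely used PHQ-9 cut-off of 10, the scale and thresholds are assumptions for illustration, not an IAPT specification.

def offer_self_help(phq9: int, low_risk: bool, readiness: int,
                    phq9_cutoff: int = 10, readiness_cutoff: int = 5) -> bool:
    """Illustrative triage rule for offering a self-help (cCBT) intervention.

    phq9: PHQ-9 depression score (0-27); 10 or more is the usual caseness cut-off.
    low_risk: True when suicide/self-harm risk has been assessed as low.
    readiness: hypothetical readiness-to-change score (e.g. 0-10) -- the addition
               proposed in this letter; the scale and cut-off are assumptions.
    """
    symptomatic = phq9 >= phq9_cutoff
    motivated = readiness >= readiness_cutoff
    return symptomatic and low_risk and motivated


# Example: symptomatic and low risk, but low readiness -> consider guided support instead
print(offer_self_help(phq9=14, low_risk=True, readiness=3))   # False
print(offer_self_help(phq9=14, low_risk=True, readiness=7))   # True

Whether adding a readiness measure actually improves adherence and outcomes is, as noted above, an empirical question for future research.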
Competing interests: No competing interests