
Protocol
Protocol for the process evaluation of a complex intervention delivered in schools to prevent adolescent depression: the Future Proofing Study
  1. Joanne R Beames1,
  2. Raghu Lingam2,
  3. Katherine Boydell1,
  4. Alison L Calear3,
  5. Michelle Torok1,
  6. Kate Maston1,
  7. Isabel Zbukvic4,
  8. Kit Huckvale1,
  9. Philip J Batterham3,
  10. Helen Christensen1,
  11. Aliza Werner-Seidler1
  1. Black Dog Institute, University of New South Wales, Sydney, New South Wales, Australia
  2. School of Women’s and Children’s Health, University of New South Wales, Sydney, New South Wales, Australia
  3. Centre for Mental Health Research, Australian National University, Canberra, Australian Capital Territory, Australia
  4. Orygen The National Centre of Excellence in Youth Mental Health, University of Melbourne, Parkville, Victoria, Australia
  Correspondence to Dr Aliza Werner-Seidler; a.werner-seidler@blackdog.org.au

Abstract

Introduction Process evaluations provide insight into how interventions are delivered across varying contexts and why interventions work in some contexts and not in others. This manuscript outlines the protocol for a process evaluation embedded in a cluster randomised trial of a digital depression prevention intervention delivered to secondary school students (the Future Proofing Study). The purpose is to describe the methods that will be used to capture process evaluation data within this trial.

Methods and analysis Using a hybrid type 1 design, a mixed-methods approach will be used with data collected in the intervention arm of the Future Proofing Study. Data collection methods will include semistructured interviews with school staff and study facilitators, automatically collected intervention usage data and participant questionnaires (completed by school staff, school counsellors, study facilitators and students). Information will be collected about: (1) how the intervention was implemented in schools, including fidelity; (2) school contextual factors and their association with intervention reach, uptake and acceptability; (3) how school staff, study facilitators and students responded to delivering or completing the intervention. How these factors relate to trial effectiveness outcomes will also be assessed. Overall synthesis of the data will provide school cluster-level and individual-level process outcomes.

Ethics and dissemination Ethics approval was obtained from the University of New South Wales (NSW) Human Research Ethics Committee (HC180836; 21st January 2019) and the NSW Government State Education Research Applications Process (SERAP 2019201; 19th August 2019). Results will be submitted for publication in peer-reviewed journals and discussed at conferences. Our process evaluation will contextualise the trial findings with respect to how the intervention may have worked in some schools but not in others. This evaluation will inform the development of a model for rolling out digital interventions for the prevention of mental illness in schools.

  • child & adolescent psychiatry
  • depression & mood disorders
  • preventive medicine

This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.


Strengths and limitations of this study

  • The methodology of this embedded process evaluation is underpinned by implementation frameworks and logic modelling.

  • Flexible and pragmatic quantitative and qualitative data collection methods will be used to balance research rigour with feasibility within the school delivery context.

  • Process data from a range of key stakeholders will be collected, including school staff, study facilitators and students.

  • To minimise burden on schools, fidelity data from teachers and in-depth qualitative data from students will not be collected.

  • The methodology and study processes were pilot tested to ensure appropriateness within the school context.

Introduction

Embedded in randomised controlled trials (RCTs), process evaluations provide insight into the reasons why interventions work in some contexts and not in others. These evaluations can help to demystify the ‘black box’ of complex intervention trials by taking into account contextual factors, differences in how the intervention is delivered and adaptations made when delivering the intervention within a particular system.1 2 Contextual factors typically include features of the organisation or broader environment that influence the delivery of the intervention (eg, leadership, engagement, culture, political landscape). Considering the contribution of contextual factors is necessary to aid interpretation of trial outcomes, maximise the knowledge gained from trials, identify optimal delivery processes across different settings and inform broader dissemination efforts. In recognition of this need, the UK’s Medical Research Council (MRC) set out a framework that emphasises the value of process evaluations for capturing both the contextual and implementation factors associated with complex interventions.3

In line with best practice recommendations that study protocols prespecifying methods and approaches be published to maintain research integrity,4 we describe a protocol for a mixed-methods process evaluation embedded within a cluster RCT (cRCT) known as the Future Proofing Study (FPS). The FPS is a large school-based trial examining whether depression can be prevented using cognitive–behavioural therapy (CBT) delivered by smartphone application.5

There has been an increase in the availability of digital mental health programmes which show promise in addressing the significant disease burden associated with depression.6 Depression often first emerges during adolescence7 and treatment alone cannot adequately reduce this burden.8 Accumulating evidence indicates that early adolescence is the ideal developmental window during which to intervene because it captures young people before the incidence of depression increases exponentially around the age of 16–17 years.9 10 Therefore, increased efforts are being directed toward prevention approaches that are developed specifically for delivery among adolescents.

While there is evidence supporting the value and effectiveness of prevention programmes in schools,11 12 their uptake has been significantly limited by low levels of help-seeking13 as well as practical constraints of cost and scalability. A lack of understanding about how to deliver these interventions sustainably at scale, and how to engage those who stand to benefit, has hampered the translation of prevention approaches into the community.

Two ways to overcome the cost and scalability barriers to implementing adolescent depression interventions are:

  • Delivering universal prevention programmes in schools.

  • Using technology to deliver programmes, which tends to be lower cost than traditional face-to-face methods.14

Working with schools to deliver universal prevention programmes to every student, regardless of their risk, circumstance or symptom profile, dramatically increases the potential reach of interventions; it also means that these programmes can eventually be integrated into the curriculum, making this approach a sustainable implementation strategy for widescale delivery and dissemination. Additionally, working with schools to deliver such programmes reduces the need for young people to actively seek professional help. This is critically important: despite having a significant need, most young people with mental illness do not seek help and receive the services they need.15 16 The scaffolding provided by schools can be leveraged to deliver prevention programmes to all students, which fits with a trend for schools to be designated the first point of contact for supporting youth mental health problems when they first emerge.9 Process evaluations of face-to-face mental health programmes have been documented, some of which have been delivered in school settings (eg, 17). However, we were unable to find any process evaluation descriptions in the literature that evaluate digital mental health programmes in school settings. Conducting process evaluations specific to digital methods of delivery is important because the contextual barriers and facilitators, as well as concerns around fidelity of the intervention, are likely to have unique characteristics. For example, facilitator roles and training in intervention delivery will differ when supporting the use of an automated programme relative to a face-to-face programme. This process evaluation is an initial step towards addressing that gap.

Technology offers a promising way to deliver mental health interventions to the community. Digital mental health interventions offer two key advantages over face-to-face approaches: first, they are cheaper to access and more cost-effective,14 18 and second, they can reach people across vast geographical areas. The latter is particularly important given Australia’s geography, where remoteness and low population density mean that people in regional and rural areas often do not have the same level of access to mental health services as people in metropolitan areas. With more young people than ever using smartphones, mental health interventions delivered online or through applications represent an exciting avenue for reaching adolescents.

The FPS

The trial that is the subject of this process evaluation is the FPS. The FPS addresses barriers of reach, cost and scalability by delivering a depression prevention programme via a smartphone app to year 8 secondary school students aged 12–13 years. This study is being conducted with approximately 200 Australian schools (up to 10 000 participants), of which half will be allocated to the intervention condition. The primary outcome is symptoms of depression. Secondary outcomes include anxiety, psychological distress and insomnia, among others. Symptom outcomes will be assessed at baseline, post-intervention, 6 months (primary outcome only) and then annually for 5 years. The primary endpoint is 12 months following baseline. More details are available from the Australian New Zealand Clinical Trials Registry and protocol paper.5

The intervention being delivered is known as SPARX, a CBT-based programme incorporating gamification principles. The development and initial evaluation of SPARX have been documented in detail previously.19 SPARX has been tested in an Australian sample of secondary students (delivered via computer) and shown to prevent depression in the lead up to final school examinations.20 The gamified intervention teaches young people about the relationship between thoughts, feelings and behaviour. Skills learnt through SPARX include emotion identification, emotion regulation, behavioural activation (being active), recognising and challenging unhelpful thoughts, and practical problem-solving. SPARX consists of seven 20-minute modules. The intervention is fully automated, and the therapeutic components are standardised. See the trial protocol for full intervention details.5

School engagement and recruitment

The FPS requires multiple levels of approval, beginning with state department and independent school body approval (New South Wales (NSW) Department of Education; Catholic school dioceses), followed by individual school engagement. The engagement strategy for this study involves sending electronic communication material to all schools across NSW and in other Australian capital cities, targeting school principals and well-being staff to invite them to take part in the study. Schools are invited to submit expressions of interest and are subsequently followed up over the phone by the research team, who explain more about the study. The schools for which the FPS is a good fit (as determined by the school) are then signed onto the study, with support from the principal, the school counsellor and at least one other staff member (typically a teacher). In line with best practice in implementation science,21 this group of 2–3 staff members (typically not including the principal) will form the school-specific ‘study implementation team’. After a school signs on to the study, several webinars are scheduled in the lead-up to the study start date, so that school staff and parents can listen to a 15-minute study overview from the trial manager and have their questions answered.

Preparation

In preparation for in-class assessment sessions, study facilitators (volunteer research assistants) are recruited to support the study. These facilitators attend schools to introduce the study to students and to ensure the technology is functioning so that students can download the SPARX app and complete the baseline (and post-intervention) questionnaires. All facilitators go through an interview and screening process prior to selection, then attend a half-day face-to-face training session before supporting the study in schools. This training provides an overview of the study and detailed information about their roles within schools, including a step-by-step guide to running the sessions. Discussion and practice are core components of the training.

Aims and objectives

We adapted the MRC framework for complex interventions to focus on effectiveness of an evidence-based intervention within a specific context of delivery (ie, schools). Overall, the objective of this process evaluation is to understand how SPARX is implemented and delivered in schools, and to identify systematic differences and variation in delivery. Specifically, the aims are:

  1. To evaluate the reach (including completion), uptake and acceptability of the intervention (school and student level).

  2. To understand the contribution of contextual factors (eg, characteristics of the outer/inner setting, intervention, individuals) to:

    • School-level fidelity to the implementation strategy. For example, different schools will likely provide different levels of study support based on available resourcing.

    • Implementation outcomes (intervention reach, uptake, acceptability), as assessed from the perspectives of school staff, teachers and students. For example, young people’s openness to receiving mental health material via an app will likely impact intervention acceptability and completion.

  3. To examine the impact of school-level variation (in implementation fidelity and outcomes) on clinical effectiveness outcomes at the school and student level. School-level clinical effectiveness is defined as changes in clinical outcomes (eg, self-reported depression) for different schools. The differing ways in which schools support and deliver the intervention will inevitably affect its effectiveness, and this evaluation will assess these differences.3 22 For example, the degree to which senior school leaders treat participation in study activities as a priority for the school will likely impact effectiveness at the cluster level.

This process evaluation has been designed to capture important information from teachers, school staff and students at both the school and individual level, which may ultimately impact the effectiveness of the intervention on clinical mental health outcomes for students. Findings will provide insight into factors which support and/or hinder the implementation of digital universal mental health programmes in school settings. Knowledge gained from this process evaluation will help to inform the development of a model and guide for how to best deliver digital mental health programmes to young people in schools.

Methods and analysis

Design

This study uses a hybrid type 1 approach,23 with a focus on implementation process factors and outcomes in the context of an effectiveness trial. The evaluation is guided by the Consolidated Framework for Implementation Research (CFIR) and the RE-AIM (Reach, Effectiveness, Adoption, Implementation, Maintenance) framework.24 25 In keeping with Nilsen’s categorisation of implementation theories, models and frameworks, these frameworks help us to understand different parts of the implementation process.26 The CFIR will be used to identify barriers and facilitators to intervention implementation and effectiveness.26 The CFIR is a widely used determinant framework which has been applied in multiple settings,27 including schools (eg, 28). The CFIR identifies five major domains: (1) the outer setting, which includes the social, political and economic context in which the implementing organisation exists; (2) the inner setting, which includes features and characteristics of the organisation such as leadership and relative priority; (3) the characteristics of individuals, which include organisational staff knowledge and attitudes about the intervention, and their role and identification within the wider organisation; (4) the characteristics of the intervention itself and (5) implementation processes, which include the ways that the intervention will be delivered in a given context (including fidelity to the implementation strategy). Normalisation Process Theory (NPT) will be used to provide additional insights into implementation processes.29 NPT aims to identify and explain how new interventions are implemented and become integrated into routine care.

The RE-AIM framework will be used to evaluate the implementation outcomes, including intervention reach, uptake and acceptability/appropriateness.24 30 These outcomes also map onto the framework of implementation outcomes proposed by Proctor et al.31 32 Reach refers to the proportion of eligible participants who opened, used and completed the intervention, as well as the proportion of students from the entire cohort of eligible students in intervention schools who consented to participate. Uptake refers to the proportion of schools that were onboarded to the study (intervention and control arms), and of school staff who were willing to support the delivery of the intervention (intervention arm only). Reach and uptake also incorporate the representativeness of the sample (school level and individual level). Acceptability and appropriateness refer to the perceived agreeableness or fit of the intervention. This evaluation will incorporate how the barriers and facilitators identified through the CFIR impacted the implementation outcomes, and, in turn, how these implementation outcomes impacted effectiveness outcomes.

Logic model

Following MRC guidance,3 the research team developed a logic model for the FPS process evaluation in a series of participatory workshops. The logic model was prospectively informed by key CFIR constructs identified in previous literature as being important in school-based studies, but also feasible and appropriate to measure within the school context in the FPS. The key constructs included the outer setting, inner setting, individual characteristics and intervention characteristics. The logic model (figure 1) was developed to consider the key factors (mapped to the CFIR) that would potentially impact implementation of the intervention (mapped to RE-AIM), as well as its clinical effectiveness. The process evaluation methods, including the selection of dependent variables and design of surveys and semistructured interview guides, were derived from this logic model. See table 1 for CFIR/RE-AIM domains, key research questions, process data and data that will be collected within each domain.

Figure 1

Logic model. The model shows that CFIR constructs, including school context characteristics, school organisational characteristics and individual characteristics, will influence how staff engage with the implementation strategy. The intervention itself, which includes the core cognitive–behavioural therapeutic components, is conceptualised as standardised across individuals because it is delivered digitally, follows a fixed schedule and does not incorporate tailored content. The yellow input factors are expected to vary across schools and individuals, thus influencing engagement and flexibility of the implementation strategy and in turn, implementation outcomes and student-level outcomes. The logic model and implementation plan were externally peer reviewed by an experienced and internationally recognised implementation scientist outside the team within an implementation workshop. For details on assessment of these factors, see table 1. CFIR, Consolidated Framework for Implementation Research; FPS, Future Proofing Study.

Table 1

Process evaluation details including process data, outcome data, data type and source

A pilot study of eight schools conducted in 2019 was used to assess the suitability of the planned implementation strategy and process evaluation methods. We retrospectively applied our logic model to collected data to evaluate whether our methodology captured relevant constructs and sufficient variation in these constructs. We integrated learning from this pilot study using a dynamic feedback process to strengthen our methodology for the full-scale FPS phase. Some minor changes were made following this pilot which relate to the CFIR, including a greater emphasis on subconstructs such as school climate (inner setting).

Figure 2

Details of implementation strategy training and delivery structure.

School implementation strategy

The school implementation strategy was developed by the study authors. The authors drew on their experience in school-based intervention delivery and integrated feedback from teachers and school staff from several completed school-based trials that delivered digital interventions to school students.20 33 Stakeholder consultation specifically for this study involved discussions with the Department of Education, consultation with several school parent committees and consultation with both youth and parent Lived Experience Advisory Panels. This strategy was also refined following the first pilot wave involving eight intervention schools.

The school ‘study implementation teams’ are principally responsible for the implementation of the intervention and liaising with the research team. As described earlier, these teams typically incorporate at least one classroom teacher and one school counsellor to assist with the intervention delivery. Study facilitators also support the delivery of the SPARX intervention by attending schools for the first school session.

Implementation strategy

During the active intervention phase, schools allocate a minimum of 4×20 min school class sessions during which students complete the SPARX intervention. The additional three sessions may be completed either in class if permitted by the schools, or in the students’ own time. The implementation strategy developed by the project team and stakeholders to support the completion of the SPARX intervention comprises:

  • Standardised facilitator training delivered face-to-face over a half day.

  • Study facilitators are present at schools to support students in downloading the SPARX app and completing the baseline assessments.

  • School implementation team provided with a SPARX user guide and information booklet they can refer to during the sessions.

  • Schools provide students with weekly verbal reminders in homeroom class to use the app regularly.

  • Schools publish brief information about the study and mental health tips in the school weekly newsletter.

  • Schools liaise with research team weekly to troubleshoot problems.

This is the strategy outlined to schools by the research team, and adherence to it will be assessed. Whether schools schedule more than the four mandated in-class sessions for intervention completion is flexible and can be adapted to suit the preferences of the school. Student participants receive a $A20 voucher after the intervention period to cover any phone or data-related costs incurred. See figure 2 for details of training and delivery structure.

Data collection methods and participant groups for process evaluation

There are three participant groups taking part in the process evaluation: year 8 students, school staff members (eg, teaching and counselling staff) and study facilitators (see table 2). School staff members will be those responsible for leading the delivery of the study in their school or teaching staff who have a supporting role (eg, homeroom teachers who remind students to complete the intervention).

Table 2

Summary of data forms (and collection point) provided by each of the participant groups

Four types of data will be collected and triangulated: self-report questionnaire data, digital analytic data, administrative data and qualitative interview data. Self-report questionnaire data specific to the process evaluation will be collected from staff and study facilitators using online survey software (Qualtrics) programmed by one of the authors (JRB). These questionnaires assess demographic information, school organisational characteristics, individual characteristics, intervention characteristics, implementation processes and implementation outcomes. Where no suitable published questionnaires were identified for this process evaluation, we adapted existing standardised measures or developed our own items (details below). Self-report questionnaire data from year 8 students about intervention use and feedback will be collected via an online survey (details described in reference 5). Digital analytic data about intervention use will be captured by the purpose-built Black Dog Institute research platform, which is being used for the broader FPS. Administrative data will be collected through a range of sources, including communications with the research team. Qualitative data will be collected using semistructured interviews. All data will be collected from intervention schools only, as no intervention is implemented in control schools. Student and school staff data will be collected immediately after the intervention period has been completed (ie, after the 6-week intervention stage); facilitator data will be collected both before and after the intervention period.

Implementation predictors

School organisational characteristics (inner setting)

Publicly available information

Publicly available information will be collected about school contextual characteristics, including socioeconomic level, size, location, type and funding.

General school culture

School staff will complete the Survey of School Promotion of Emotional and Social Health (SSPESH34) and a measure of organisational culture.35 The SSPESH assesses a school’s capacity to promote social and emotional well-being, and contains four subscales: Positive School Community, Student Social and Emotional Learning, Engaging Families and Supporting Students Experiencing Mental Health Difficulties. Items are rated on a 4-point Likert scale from 0 (not yet in place) to 3 (completely in place), and preliminary investigations support the scale structure and criterion-related validity.34

We adapted a 9-item questionnaire about general culture within a healthcare setting for use within the school setting.35 Items are rated on a 5-point Likert scale from 1 (strongly agree) to 5 (strongly disagree). Lower scores indicate a more positive working culture, which includes transparency, productive working relationships and receptivity to feedback. Previous investigation has shown that this measure has good internal consistency, although it overlaps somewhat with other subconstructs within the inner setting (eg, learning climate34).

Implementation climate

School staff will complete a measure of implementation climate that we adapted for use in the school context.35 Items are rated on a 5-point Likert scale from 1 (strongly agree) to 5 (strongly disagree). Lower scores indicate increased staff receptivity to the programme and support within the school, including rewards and recognition. This measure has acceptable internal consistency and good discriminant validity.35

Relative priority

School staff will complete one adapted item from the School Contextual Barriers subscale of the Perceived Attributes of the Healthy Schools Approach Scale.36 The item is rated on a 5-point Likert scale from 1 (strongly agree) to 5 (strongly disagree). Higher scores indicate that other activities did not interfere with implementation of the FP programme.

Competing demands

Staff will complete three items, developed specifically for this study, which assess how much time they allocated, and desired to allocate, to the FP programme relative to other competing workload demands, rated on visual analogue scales anchored at 0 (no work time) and 100 (all of my work time). Higher scores indicate greater allocation of time to, and prioritisation of, the FP programme.

Individual characteristics

Leadership

School staff will complete the Implementation Leadership Scale (ILS37). The ILS is a 12-item scale that assesses leadership behaviours that support implementation of evidence-based practices. The ILS contains four subscales, two of which will be included in the current study (Knowledge Leadership and Supportive Leadership). These scales assess the degree to which the staff member was knowledgeable about and offered support to the programme. Items are rated on a 5-point Likert scale from 0 (very much so) to 4 (not at all) and will be recoded such that higher scores indicate more effective implementation leadership behaviours. This measure has excellent internal consistency, convergent validity and discriminant validity.37
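
To illustrate the recoding step described above, a minimal Python sketch is given below. The DataFrame and column names are hypothetical stand-ins for the exported survey data, and the reverse-scoring rule (scale maximum plus scale minimum, minus the raw score) is a standard convention assumed here rather than one prescribed by the study.

```python
# Minimal reverse-scoring sketch; 'ils_items' and its column names are hypothetical
# placeholders for the exported ILS responses, not the study's actual data export.
import pandas as pd

def reverse_score(items: pd.DataFrame, scale_min: int, scale_max: int) -> pd.DataFrame:
    """Reverse-score Likert items so that higher values indicate a more positive rating."""
    return (scale_max + scale_min) - items

# Raw ILS items rated 0 (very much so) to 4 (not at all)
ils_items = pd.DataFrame({"knowledge_1": [0, 2, 4], "support_1": [1, 3, 4]})
recoded = reverse_score(ils_items, scale_min=0, scale_max=4)
print(recoded.mean())  # item means after recoding; higher = more effective leadership
```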

Self-efficacy and confidence

Study facilitators will rate their level of confidence in performing 23 different tasks during school visits, based on the training programme they completed, on visual analogue scales anchored at 0 (not at all confident) and 100 (very confident). Higher scores indicate higher levels of confidence. A similar questionnaire will be repeated following their school visits, along with four short-answer questions that assess experiences of supporting staff and students during school visits.

Intervention characteristics

Relative advantage and evidence strength and quality

School staff will complete the 2-item Relative Advantages subscale and one item from the Anticipated Benefits subscale of the Perceived Attributes of the Healthy Schools Approach Scale.36 Adapted for the current study, items are rated on a 5-point Likert scale from 1 (strongly agree) to 5 (strongly disagree). Scores will be recoded such that higher scores indicate greater perceived advantage of the FP programme compared with others and greater perceived impact on mental health, respectively.

Implementation processes

Normalisation and integration into routine practice

School staff will complete NPT’s accompanying tool, the Normalisation MeAsure Development questionnaire (NoMAD38). The NoMAD is a 23-item measure that assesses how professionals involved in the implementation of a complex intervention perceive implementation processes. It is a flexible measure that can be altered to more accurately describe the adoption of new interventions at the provider level. Ten items of the NoMAD will be used in this study to assess intervention buy-in from school staff, and how staff members incorporated the initiative into their standard work responsibilities. These items are grouped into three categories: coherence (ie, making sense of an intervention), cognitive participation (ie, working with others to support an intervention) and collective action (ie, the type of work that people do to support an intervention). Items are rated on a 5-point Likert scale from 1 (strongly agree) to 5 (strongly disagree). Initial validation demonstrated that the NoMAD has good face validity, construct validity and internal consistency.39

Fidelity to the implementation strategy

Fidelity will be captured through implementation checklists, communications between schools and research staff, and interviews with school staff.

Implementation outcomes

Reach

SPARX app usage data from students will allow for the assessment of app use (downloads, installs and opens), completion (number of modules completed, out of a total of seven) and time spent using SPARX. Administrative data about the number of students in intervention schools with consent to participate will also be used as an indicator of reach. Self-report data about individual characteristics (eg, gender, mental health history) will be used to gauge the representativeness of the students in the intervention schools.
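
As an illustration of how these reach indicators could be derived from the consent records and app usage data, a minimal Python sketch is shown below. The field names (consented, app_opened, modules_completed) are hypothetical placeholders rather than the actual variables exported by the research platform.

```python
# Illustrative reach summary for one school cluster; column names are hypothetical
# placeholders for fields in the consent records and SPARX usage data.
import pandas as pd

def reach_summary(students: pd.DataFrame, total_modules: int = 7) -> dict:
    """Summarise reach indicators for one school's eligible year 8 cohort."""
    eligible = len(students)
    consenters = students[students["consented"]]
    opened = consenters["app_opened"].sum()
    completed = (consenters["modules_completed"] >= total_modules).sum()
    return {
        "consent_rate": len(consenters) / eligible,              # eligible students who consented
        "open_rate": opened / max(len(consenters), 1),           # consenters who opened SPARX
        "completion_rate": completed / max(len(consenters), 1),  # consenters completing all modules
    }

# Toy example with fabricated values, for illustration only
demo = pd.DataFrame({
    "consented": [True, True, True, False, True],
    "app_opened": [True, True, False, False, True],
    "modules_completed": [7, 3, 0, 0, 7],
})
print(reach_summary(demo))
```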

Uptake

Administrative data about the proportion of schools that were onboarded to the study and the proportion of teachers who supported the intervention will be collected to provide an index of uptake by schools. Representativeness of the sample will be informed by publicly available information about school characteristics (eg, socioeconomic status, location) and self-report data about school staff characteristics (eg, role, gender).

Appropriateness

School staff will complete the Intervention Appropriateness Measure (IAM40). The IAM is a pragmatic 4-item measure of the perceived fit, relevance or compatibility of an evidence-based practice for a context, person or problem.32 Items have been adapted for this study and are rated on a 5-point Likert scale from 1 (strongly agree) to 5 (strongly disagree). Scores will be recoded such that higher scores indicate higher levels of perceived appropriateness. The IAM demonstrated strong psychometric properties in previous research.40 School staff will also complete one adapted item from the Agency Leadership Support subscale of the Barriers and Facilitators to Implementing Survey.41 The item is rated on a 5-point Likert scale from 1 (strongly agree) to 5 (strongly disagree) and will be recoded such that higher scores indicate greater compatibility of the FP programme within a particular school.

Acceptability

Year 8 students will complete an 11-item feedback questionnaire about SPARX. The questionnaire assesses three domains, including: (1) reasons for non-adherence; (2) intervention acceptability and (3) skills learnt from the intervention. Items will be quantified individually. This questionnaire has previously been used to assess the acceptability of the SPARX programme in the school context.42

Individual interviews

Interview guides for school staff and study facilitators were derived from the logic model, CFIR online resources (eg, https://cfirguide.org/evaluation-design/qualitative-data/), and the broader literature investigating the delivery of interventions in school settings43 44 (see online supplemental file 1 for interview guides). Interviews will provide information about both implementation predictors and outcomes. For school staff, questions focus on motivations and expectations about the intervention and study processes more broadly; knowledge and beliefs about the intervention; relative advantages of the intervention; self-efficacy; barriers and facilitators affecting the delivery of the intervention, including fidelity; appropriateness and acceptability of the intervention; and recommendations for future implementation. School counsellors will also be asked questions about their experiences of managing high-risk participants within the study and compatibility with existing workload. For study facilitators, questions focus on motivations and expectations about their role; confidence and competence in supporting the delivery of SPARX in schools; the quality of their training and perceptions about their ability to support the study as required.

Supplemental material

The interview guides allow for flexibility in questioning and for respondents to diverge in their responses. Questions will primarily be open-ended, with specific prompts and follow-up questions used as necessary to encourage respondents to elaborate on their ideas and provide examples.

Patient and public involvement

The FPS was developed with key stakeholders including school personnel, school counsellors, parents, adolescents and individuals with a lived experience of mental illness. All aspects of the study design, appropriateness of outcome measures and consent procedures were developed in consultation with these stakeholder groups. The current process evaluation involves consultation with school staff members to understand their experience. The SPARX intervention itself was codesigned with young people. All study results will be shared directly with participants and their schools through lay summaries and infographics.

Procedure

The FPS has three waves of delivery (October–December 2020; April–July 2021; July–September 2021). Process evaluation data will be collected at each wave. All school students will complete relevant survey questions at the same time as completing primary measures for the FPS. All study facilitators will be asked to provide informed consent for their data to be used for research purposes following the compulsory face-to-face training session they attend with the research team at the Black Dog Institute. Data provided by facilitators will be from two surveys, one completed immediately following the training and another completed after their final school visit. All school staff from intervention schools (on average, three from each school, estimated number of intervention schools=100) who were directly involved in the study will be invited to complete one survey following the 6-week SPARX intervention period. All surveys will be completed online.

After completing their respective online questionnaires, all school staff and study facilitators will be given the option to participate in a 60-minute individual interview (completed either in person or remotely). Purposive sampling will be used to capture a range of diverse school settings and experiences. Face-to-face interviews will be held in a quiet room on school grounds or at the Black Dog Institute. Virtual interviews, which may be required due to COVID-19 restrictions, will also be conducted. All interviews will be audio-recorded and transcribed verbatim. All contact with study facilitators and school staff, including the semistructured interviews, will be made by research staff who have had no previous contact with them during the trial. This independence minimises the risk of bias and demand effects.

Data analysis plan

Quantitative

Survey questionnaire data (from approximately 100 schools) will be exported into data analytical software for analysis. Descriptive statistics will be calculated for all participant groups and will provide information about intervention use and its acceptability (questionnaire data from year 8 participants), differential implementation, fidelity to the implementation strategy, school context factors within each school (questionnaire including several short answer questions from school staff), and competence and experience in schools from those facilitating the study (facilitator questionnaires). Differences between school clusters will be assessed using analysis of variance methods. Regression models will assess the effects of contextual factors on implementation outcomes (eg, reach, uptake, acceptability).
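
The exact models will depend on the distribution of each outcome, but a minimal sketch of the kind of cluster-aware regression described above is given below, using simulated data and the statsmodels library. The predictor and outcome names (implementation_climate, school_ses, modules_completed) are illustrative assumptions, not the study's final variable set.

```python
# Sketch of a random-intercept regression relating school-level contextual factors
# to an implementation outcome, with students nested within school clusters.
# All variable names and data are simulated for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_schools, n_per_school = 20, 30
school_id = np.repeat(np.arange(n_schools), n_per_school)
climate = np.repeat(rng.normal(0, 1, n_schools), n_per_school)         # school-level implementation climate
ses = np.repeat(rng.normal(0, 1, n_schools), n_per_school)             # school socioeconomic indicator
school_effect = np.repeat(rng.normal(0, 0.5, n_schools), n_per_school) # unobserved school-level variation
modules = 3.5 + 0.8 * climate + 0.2 * ses + school_effect + rng.normal(0, 1, len(school_id))

df = pd.DataFrame({
    "school_id": school_id,
    "implementation_climate": climate,
    "school_ses": ses,
    "modules_completed": modules,
})

# Random intercept for school accounts for clustering of students within schools
model = smf.mixedlm("modules_completed ~ implementation_climate + school_ses",
                    data=df, groups=df["school_id"]).fit()
print(model.summary())
```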

Qualitative

Interviews will be digitally audio-recorded and transcribed verbatim. The transcripts will be checked for accuracy against the sound files as per best practice in transcription.45 46 Qualitative data will then be imported into NVivo to aid data management and analysis. Thematic analysis will be undertaken to identify, interpret and report on the repeated patterns of meaning within the data, drawing from Braun and Clarke’s classic six-phase model.47 48 An iterative and reflexive approach will be used to analyse the data, incorporating themes from the data together with topics covered in the interview guide. Two coders will independently engage in a familiarisation phase before generating codes and initial themes for a subset of the data. These codes and themes will be reviewed and discussed by the two coders, with refinement occurring via an iterative process. A senior qualitative analyst will also review the first-stage coding framework and scheme before all transcripts are coded by the first coder. Refinement will continue via an iterative process until final codes and themes are realised and defined across the whole data set. Research rigour will be enhanced by a team approach to analysis, reflexive field notes and prolonged engagement with the subject matter.49

Triangulation of qualitative and quantitative data

Triangulation involves the use of multiple approaches to address a research question. The combination of several approaches increases confidence in the findings and provides a more comprehensive account of the results than individual approaches would do alone.50 In this study, reliability, validity and confidence will be maximised through cross-verification and exploration of differences between the outcomes of the various methods. This takes place in several ways:

  • Maximising validity in analysis of qualitative data within the research team using techniques such as discussing coding, constant comparison, accounting for deviant cases and systematic coding.

  • Triangulation of school staff and research assistant interviews with results from the questionnaires, exploring and accounting for differences.

  • Triangulation of self-report and interview data with publicly available information relating to school contextual characteristics (eg, school socioeconomic level and size) and school delivery of the programme, including deviations to the implementation strategy.

  • Mapping the perspectives of different stakeholders across the study (school staff, study facilitators).

Additional analyses

We will generate an ‘implementation strength’ metric for each school and examine its relationship to primary and secondary trial outcomes. This is an emerging evaluation approach, used mainly in low-income and middle-income countries, which aims to quantify the degree of implementation effort needed during intervention delivery to achieve the desired benefits.51 52

The implementation strength metric will provide funders and policymakers with an objective measure for monitoring the effectiveness of implementation if the programme is rolled out beyond this trial as a sustainable approach. The metric can be used to assess whether the approach to implementation meets the minimum level required to prevent the onset of mental health problems in adolescents. It will be based on implementation inputs and contextual factors informed by the intervention logic model and the process evaluation frameworks (CFIR and RE-AIM) (eg, adoption by teachers and other staff in the school, and fidelity to the implementation strategy), and will be developed with the FP research team using principal component analysis.
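
As a rough illustration of how such a composite might be constructed, the sketch below standardises a set of school-level implementation indicators and takes the first principal component as the implementation strength score. The indicator names and simulated values are hypothetical; the final metric will be defined by the FP research team.

```python
# Illustrative PCA-based 'implementation strength' composite; the indicators below
# (fidelity_checklist, staff_adoption, reminder_weeks, class_sessions) are hypothetical
# school-level inputs, not the study's final variable set.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
schools = pd.DataFrame({
    "fidelity_checklist": rng.uniform(0.4, 1.0, 50),  # proportion of strategy steps delivered
    "staff_adoption": rng.uniform(0.2, 1.0, 50),      # proportion of staff supporting delivery
    "reminder_weeks": rng.integers(0, 7, 50),         # weeks in which homeroom reminders were given
    "class_sessions": rng.integers(4, 8, 50),         # in-class SPARX sessions scheduled
})

# Standardise indicators, then use the first principal component as the composite score
z = StandardScaler().fit_transform(schools)
pca = PCA(n_components=1)
schools["implementation_strength"] = pca.fit_transform(z)[:, 0]

print(pca.explained_variance_ratio_)                 # variance captured by the composite
print(schools["implementation_strength"].describe())
```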

Ethics and dissemination

Ethics approval was obtained from the University of NSW Human Research Ethics Committee (HC180836; 21st January 2019) and the NSW Government State Education Research Applications Process (SERAP 2019201; 19th August 2019). Results will be submitted for publication in peer-reviewed journals and discussed at conferences. Our process evaluation will contextualise the trial findings with respect to how the intervention may have worked in some schools but not in others. This evaluation will inform the development of a model for rolling out digital interventions for the prevention of mental illness in schools.

Discussion

This paper describes the design of a mixed-methods process evaluation of a cRCT, the FPS. The FPS investigates the impact of a digital cognitive–behavioural therapy intervention when delivered at scale in school settings. Digital mental health programmes have tremendous potential to prevent up to 22% of depression cases and, when delivered at scale, could have population-level impacts.53 However, these programmes have not been translated into practice and policy because optimal ways to scale and deliver these interventions are not yet well understood.

As an initial step to address this issue, the current process evaluation will attend to contextual and implementation factors that vary across schools and provide a lens through which to interpret trial efficacy outcomes. We expect that results will provide a richly detailed and nuanced understanding of the key factors involved in the effective delivery of digital mental health programmes across different schools. We expect that results will not only contextualise our trial findings but will also be used as a model to guide the delivery of school-based interventions that focus on preventing mental illness more broadly. Findings from this process evaluation will indicate whether the approach used in the FPS trial is likely to be sustainable in the school environment going forward and, if so, the threshold level of support required in order to prevent depression and benefit student mental health.

The prospective publication of this protocol outlines our planned methodological approach. It also serves as a road map for other researchers on a practical way of carrying out process evaluations of complex interventions in the school setting. As is the case with the delivery of interventions across different contexts, we acknowledge that our approach has inbuilt flexibility to explore the data and make provisions for unexpected implementation factors that arise.

Limitations and strengths

There are several limitations to our process evaluation that warrant mention. First, we are not including direct observation of teachers in their role supporting the delivery of SPARX. While this would provide objective fidelity data, it requires resources beyond the scope of this project and is not representative of how the programme will be sustained following the conclusion of the trial. Second, given the complexity of the study and the high demand placed on students (eg, engaging with the study apps and completing the online surveys at multiple time points over 5 years), we are not collecting in-depth qualitative data from students by way of interviews. Instead, we will collect information about students’ perceptions of the intervention (eg, acceptability) through short self-report questions in the online survey. Third, the qualitative interviewer is a member of the research team (but not the evaluation team). Care will be taken to ensure that this staff member has no contact with schools prior to the interview visit to minimise bias; however, there remains a risk that demand effects may influence the information that is shared. Fourth, the process evaluation procedures (questionnaires and interviews) will undoubtedly add to the burden placed on school staff. Given that the FPS already places significant demands on the time of busy school staff, this additional component might contribute to low levels of participation.

To the point of burden on schools, one of the strengths of the design is that we have undertaken a pilot phase involving eight intervention schools and have been able to refine our processes (eg, introducing an incentive for school staff members to participate). This process evaluation also combines qualitative and quantitative methods, which will be triangulated to provide a coherent and comprehensive picture of the data. The use of the ‘implementation strength’ metric represents a novel approach in this field, borrowed from implementation science in low-income and middle-income countries. The inclusion of this approach will provide important information to funders and policymakers following on from this trial, indicating the level of implementation support required to prevent mental illness and improve the well-being of adolescent school students.

Notwithstanding the limitations raised above, this process evaluation will contribute to the broader knowledge base and indicate how best to deliver digital mental health prevention programmes in school settings.

Trial status

Recruitment for the trial is underway. Data collection commences in October 2020 (delayed from April 2020 due to COVID-19).

Acknowledgments

We would like to thank Professor Melanie Barwick for her advice on this manuscript, together with all the facilitators from the Training Institute for Dissemination and Implementation Research in Health (TIDIRH; Australia 2020) for their comments and feedback on this project.

References


Footnotes

  • Twitter @ACalear, @alizaws

  • Contributors HC and AW-S conceived the study and secured the funding. AW-S and JRB led the design of the process evaluation, with input from all authors (including KH, IZ and PJB), and expert guidance from RL and KB. AW-S drafted the manuscript, with assistance from JRB. ALC, MT, KM, RL, KB and HC have a continuing role in monitoring the conduct and outcomes of the process evaluation. All named authors contributed substantially to the approved final manuscript.

  • Funding Funding for this project came from an NSW Ministry of Health Early-Mid Career Fellowship awarded to AW-S, and a Black Dog Institute Post-Doctoral Fellowship awarded to JRB, secured by HC. ALC is supported by NHMRC fellowships 1122544 and 1173146. PJB is supported by NHMRC Fellowship 1158707. Funding for the randomised controlled trial within which this process evaluation is embedded came from an NHMRC Project Grant Awarded to HC GNT1120646.

  • Disclaimer The funding bodies had no role in any aspect of the study design or this manuscript.

  • Competing interests None declared.

  • Patient consent for publication Not required.

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.