ERIC Number: ED663342
Record Type: Non-Journal
Publication Date: 2024-Sep-20
Pages: N/A
Abstractor: As Provided
ISBN: N/A
ISSN: N/A
EISSN: N/A
A SMART Design to Optimize Multi-Level Adaptive Interventions
Timothy Lycurgus; Daniel Almirall
Society for Research on Educational Effectiveness
Background: Education scientists are increasingly interested in constructing interventions that adapt over time to suit the evolving needs of students, classrooms, or schools. Such "adaptive interventions" (also referred to as dynamic treatment regimens or dynamic instructional regimes) determine which treatment should be offered to a student at any given stage of an intervention based on that student's needs (Raudenbush, 2008). To optimize an adaptive intervention, researchers may construct a sequential, multiple-assignment randomized trial, or SMART (Murphy, 2005). This is a form of factorial design (Murphy and Bingham, 2009) in which some or all participants are randomized multiple times to one or more interventions at critical decision points (Almirall et al., 2018). SMARTs have been extended to clustered settings where clusters (e.g., schools or classrooms), rather than individuals, are randomized to interventions. This has increased their applicability in education, where clustering is inherent. But SMARTs have yet to be extended to multi-level settings, in which the initial intervention is provided at the cluster level and subsequent intervention is provided at a lower level.

Purpose: First, we introduce statisticians and methodologists in education to adaptive interventions and SMARTs. Next, we provide education scientists with methods that allow them to optimize adaptive interventions in which treatment is provided at multiple levels (e.g., a classroom-level intervention followed by a student-level intervention). We demonstrate these methods through a comprehensive simulation study.

Adaptive Interventions and SMARTs: An adaptive intervention (AI) is a pre-specified set of decision rules that guide how best to serve the baseline and ongoing needs of individuals (Seewald et al., 2020). These rules tailor the provision of treatment to the specific needs of individuals or clusters at critical points throughout the intervention.
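To make the notion of a decision rule concrete, the following is a minimal, hypothetical sketch of a two-stage adaptive intervention; the interventions, the interim score, and the responder threshold are all illustrative assumptions, not details from the paper.

```python
# Hypothetical two-stage adaptive intervention (illustrative only).

def first_stage(classroom):
    """Stage 1: every classroom begins with the same class-wide program."""
    return "class-wide reading program"

def second_stage(student_score, threshold=40):
    """Stage 2: tailor treatment to each student's interim assessment.

    Students responding to the first-stage intervention continue it;
    non-responders are stepped up to one-on-one tutoring.
    """
    if student_score >= threshold:
        return "continue class-wide program"
    return "one-on-one tutoring"

# Interim scores for one classroom; each student gets a tailored second stage.
scores = [55, 38, 62, 29]
plan = [second_stage(s) for s in scores]
```

The decision rule is pre-specified before the study begins; only the inputs (here, interim scores) vary across students.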
Typically, such intervention occurs at the same level throughout the course of treatment: a clustered AI intervenes on the entire cluster during each stage. In this paper, we propose multi-level adaptive interventions, which allow for different levels of treatment at various stages. One such multi-level AI may initially recommend a classroom-level intervention and then recommend student-level interventions, tailored to the needs of individual students, in subsequent stages. This has the potential to reduce costs and improve outcomes because each student receives the proper intervention (in type or intensity) for their individual needs. There are many questions education scientists may ask when optimizing a multi-level AI. For example, it is natural to ask, "What initial classroom-level intervention should be provided to best improve outcomes for students within the classroom?" or "What composition of individual-level interventions should be provided to students during the second stage to best induce positive spillover effects?" Multi-level SMARTs can be used to answer such optimization questions. But because units within a given cluster are assigned to disparate second-stage interventions, this scenario is ripe for contamination across intervention arms. As such, we require new strategies when designing and analyzing multi-level SMARTs to ensure that these spillover effects do not bias our estimates.

Multi-Level SMARTs: We briefly present two approaches for designing and analyzing multi-level SMARTs. The first strategy adjusts cluster SMARTs at the design stage; the second uses assumptions about the form of the spillover effect during the analysis phase of the study.

A Design-Based Strategy: The design-based approach follows a strategy similar to that in Hudgens and Halloran (2008). We add an initial cluster-level randomization to various second-stage randomization probabilities.
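The added randomization layer might be sketched as follows. This is a hypothetical simulation, assuming for illustration two candidate compositions (50%/50% and 75%/25%) and generic arm labels A1/A2 and B1/B2; none of these names come from the paper.

```python
import random

def randomize_multilevel_smart(clusters, compositions=(0.5, 0.75), seed=0):
    """Sketch of a design-based multi-level SMART randomization.

    Each cluster is randomized to a second-stage composition and to a
    first-stage arm; its units are then individually randomized to a
    second-stage arm using the cluster's composition probability.

    `clusters` maps a cluster id to a list of unit ids.
    """
    rng = random.Random(seed)
    design = {}
    for cid, units in clusters.items():
        p = rng.choice(compositions)           # cluster level: composition
        stage1 = rng.choice(["A1", "A2"])      # cluster level: first-stage arm
        # Unit level: second-stage arm drawn at the assigned composition.
        stage2 = {u: ("B1" if rng.random() < p else "B2") for u in units}
        design[cid] = {"composition": p, "stage1": stage1, "stage2": stage2}
    return design

# Example: two classrooms of four students each.
clusters = {"classroom_1": ["s1", "s2", "s3", "s4"],
            "classroom_2": ["s5", "s6", "s7", "s8"]}
design = randomize_multilevel_smart(clusters)
```

In practice the second-stage randomization would follow an interim assessment, as in a standard SMART; the sketch only shows how the extra cluster-level randomization to compositions sits on top of the usual two stages.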
For instance, half of the clusters may be assigned to a 50%/50% second-stage composition (i.e., 50% of students will be randomly assigned to each of the second-stage interventions). The other half may be assigned to a 75%/25% second-stage composition (i.e., 75% of students are randomly assigned to one of the second-stage interventions). The first- and second-stage randomizations then proceed as is typical in SMART settings, with the second-stage randomization occurring at the unit level using the probability from the assigned composition. This approach allows us to obtain unbiased estimates of each of the estimands typical in SMARTs (e.g., the effect of the first-stage or second-stage intervention), as well as an unbiased estimate of the overall effect for the cluster under the given first-stage intervention and randomization composition. In the example above, we would have estimates of the effect under a 50%/50% second-stage composition and under a 75%/25% second-stage composition. These estimands implicitly incorporate spillover across intervention arms and thus cannot be generalized to any other second-stage composition. To obtain these estimates, we solve an estimating equation as in NeCamp et al. (2017).

An Assumption-Based Strategy: The assumption-based approach addresses spillover across intervention arms during the analysis stage. As such, its design is more straightforward: there are only two randomizations, and each occurs with equal probability. In the analysis stage, we make an assumption about the form of the spillover (e.g., that spillover increases linearly in the share of the cluster that receives the alternative intervention). We then include terms that estimate that spillover effect in our estimating equation. This provides effect estimates for each of our estimands, including the overall effect under all possible second-stage randomization compositions.
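The linearity assumption can be made concrete with a small sketch. The working model and coefficient names below are illustrative assumptions, not the paper's estimating equation: the mean outcome is taken to be linear in the second-stage arm indicator and in q, the share of the cluster assigned to the alternative intervention.

```python
# Hypothetical working model under a linear-spillover assumption
# (coefficients and form are illustrative, not from the paper).

def expected_outcome(beta0, beta1, gamma, z, q):
    """Mean outcome under second-stage arm z (1 = B1, 0 = B2) when a
    share q of the cluster receives the alternative intervention.
    The gamma * q term is the assumed linear spillover effect."""
    return beta0 + beta1 * z + gamma * q

def spillover_slope(mean_at_q1, mean_at_q2, q1, q2):
    """Recover gamma from mean outcomes observed at two compositions.
    Under linearity, the slope between any two compositions identifies
    the spillover effect, allowing extrapolation to any other q."""
    return (mean_at_q2 - mean_at_q1) / (q2 - q1)
```

This is what lets the assumption-based approach speak to all possible second-stage compositions from only two observed ones: once gamma is estimated, the model extrapolates to any q, and its validity rests entirely on the assumed linear form.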
These estimates will be unbiased so long as the assumption about the form of the spillover is correct.

Simulations: We present a simulation study in which the initial intervention is provided at the cluster level and subsequent intervention is provided at the unit level. We compare the bias and root mean squared error (RMSE) of the design-based and assumption-based approaches under a variety of forms of spillover effects, cluster sizes, and numbers of clusters.

Results: Results are presented in Table 1. The design-based approach provides unbiased effect estimates for all randomization compositions under each form of spillover. The assumption-based approach provides unbiased effect estimates when the assumption of a linear spillover effect is correct; when that assumption is incorrect, its estimates are biased. The design-based approach yields lower RMSEs for compositions further from 0.5; the assumption-based strategy yields lower RMSEs for compositions closer to 0.5.

Conclusion: We introduced adaptive interventions and SMARTs to education scientists. We proposed two methods that allow for the optimization of multi-level AIs, and we showed through a comprehensive simulation study the conditions under which each strategy is superior.
Descriptors: Educational Research, Research Design, Randomized Controlled Trials, Intervention, Hierarchical Linear Modeling, Statistical Bias, Error of Measurement, Simulation
Society for Research on Educational Effectiveness. 2040 Sheridan Road, Evanston, IL 60208. Tel: 202-495-0920; e-mail: contact@sree.org; Web site: https://www.sree.org/
Publication Type: Reports - Research
Education Level: N/A
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: Society for Research on Educational Effectiveness (SREE)
Grant or Contract Numbers: N/A