Source: American Journal of Evaluation
Showing 1 to 15 of 58 results
Peer reviewed
Cox, Kyle; Kelcey, Benjamin – American Journal of Evaluation, 2023
Analysis of differential treatment effects across targeted subgroups and contexts is a critical objective in many evaluations because it delineates for whom and under what conditions particular programs, therapies, or treatments are effective. Unfortunately, it is unclear how to plan efficient and effective evaluations that include these…
Descriptors: Statistical Analysis, Research Design, Cluster Grouping, Sample Size
Peer reviewed
Tipton, Elizabeth – American Journal of Evaluation, 2022
Practitioners and policymakers often want estimates of the effect of an intervention for their local community, e.g., region, state, or county. Ideally, these multiple population average treatment effect (ATE) estimates would be considered in the design of a single randomized trial. Methods for sample selection for generalizing the sample ATE to…
Descriptors: Sampling, Sample Size, Selection, Randomized Controlled Trials
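The reweighting idea behind generalizing a sample ATE to a target population can be sketched as poststratification: combine stratum-specific effect estimates using the target population's stratum shares. The strata, effect values, and shares below are illustrative assumptions, not taken from the article:

```python
def reweighted_ate(stratum_effects, pop_shares):
    """Generalization sketch: combine stratum-specific treatment effect
    estimates using the target population's stratum shares.
    Assumes the strata capture the effect-moderating covariates."""
    assert abs(sum(pop_shares.values()) - 1.0) < 1e-9
    return sum(pop_shares[s] * e for s, e in stratum_effects.items())

# Hypothetical example: the effect differs by school urbanicity, and the
# trial sample over-represents urban schools relative to a target state.
effects = {"urban": 0.30, "rural": 0.10}
state_shares = {"urban": 0.40, "rural": 0.60}
print(reweighted_ate(effects, state_shares))  # 0.18
```

A naive (unweighted) sample average would overstate the statewide effect here because the sample leans urban; the reweighted estimate corrects for that compositional mismatch.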
E. C. Hedberg – American Journal of Evaluation, 2023
In cluster randomized evaluations, a treatment or intervention is randomly assigned to a set of clusters, each containing individual units of observation (e.g., students who attend schools, with schools assigned to treatment). One consideration in these designs is how many units are needed per cluster to achieve adequate statistical…
Descriptors: Statistical Analysis, Multivariate Analysis, Randomized Controlled Trials, Research Design
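The units-per-cluster trade-off the abstract raises is driven by the standard design-effect formula, 1 + (m − 1) × ICC. Below is a minimal sketch; the function name and defaults are illustrative and assume a two-arm trial with equal arms, alpha = .05 (two-sided), and 80% power:

```python
import math

def cluster_mdes(n_clusters, cluster_size, icc, multiplier=2.8):
    """Approximate minimum detectable effect size (in SD units) for a
    two-arm cluster-randomized trial. The multiplier ~2.8 is
    z_{.975} + z_{.80} (about 1.96 + 0.84) for alpha = .05, 80% power."""
    deff = 1 + (cluster_size - 1) * icc             # design effect
    n_effective = n_clusters * cluster_size / deff  # effective sample size
    se = math.sqrt(4 / n_effective)  # SE of a standardized mean difference
    return multiplier * se

# With a nonzero ICC, tripling cluster size buys little precision...
print(round(cluster_mdes(n_clusters=40, cluster_size=20, icc=0.15), 3))
print(round(cluster_mdes(n_clusters=40, cluster_size=60, icc=0.15), 3))
# ...while doubling the number of clusters helps much more.
print(round(cluster_mdes(n_clusters=80, cluster_size=20, icc=0.15), 3))
```

This illustrates the familiar result that, once the intraclass correlation is nonneglible, adding clusters improves power far faster than enlarging clusters.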
Peer reviewed
Bell, Stephen H.; Stapleton, David C.; Wood, Michelle; Gubits, Daniel – American Journal of Evaluation, 2023
A randomized experiment that measures the impact of a social policy in a sample of the population reveals whether the policy will work on average with universal application. An experiment that includes only the subset of the population that volunteers for the intervention generates narrower "proof-of-concept" evidence of whether the…
Descriptors: Public Policy, Policy Formation, Federal Programs, Social Services
Peer reviewed
Bower, Kyle L. – American Journal of Evaluation, 2022
The purpose of this paper is to introduce the Five-Level Qualitative Data Analysis (5LQDA) method for ATLAS.ti as a way to intentionally design methodological approaches applicable to the field of evaluation. To demonstrate my analytical process using ATLAS.ti, I use examples from an existing evaluation of a STEM Peer Learning Assistant program.…
Descriptors: Qualitative Research, Data Analysis, Program Evaluation, Evaluation Methods
Peer reviewed
Stapleton, David C.; Bell, Stephen H.; Hoffman, Denise; Wood, Michelle – American Journal of Evaluation, 2020
The Benefit Offset National Demonstration (BOND) tested a $1 reduction in benefits per $2 earnings increase above the level at which Social Security Disability Insurance benefits drop from full to zero under current law. BOND included a rare and large "population-representative" experiment: It applied the rule to a nationwide, random…
Descriptors: Federal Programs, Public Policy, Experiments, Comparative Analysis
Peer reviewed
Ledford, Jennifer R. – American Journal of Evaluation, 2018
Randomization of large numbers of participants to different treatment groups is often not a feasible or preferable way to answer questions of immediate interest to professional practice. Single case designs (SCDs) are a class of research designs that are experimental in nature but require only a few participants, all of whom receive the…
Descriptors: Research Design, Randomized Controlled Trials, Experimental Groups, Control Groups
Peer reviewed
Wing, Coady; Bello-Gomez, Ricardo A. – American Journal of Evaluation, 2018
Treatment effect estimates from a "regression discontinuity design" (RDD) have high internal validity. However, the arguments that support the design apply to a subpopulation that is narrower and usually different from the population of substantive interest in evaluation research. The disconnect between RDD population and the…
Descriptors: Regression (Statistics), Research Design, Validity, Evaluation Methods
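The RDD logic described here, an effect identified only locally at the cutoff, can be sketched with a simple local-linear estimator on simulated data. All names, bandwidths, and values are illustrative, not from the article:

```python
import numpy as np

def rdd_estimate(x, y, cutoff, bandwidth):
    """Sharp RDD sketch: fit separate linear regressions on each side of
    the cutoff within a bandwidth; the difference in intercepts at the
    cutoff is the local treatment effect estimate."""
    xc = x - cutoff
    left = (xc >= -bandwidth) & (xc < 0)
    right = (xc >= 0) & (xc <= bandwidth)
    # np.polyfit returns [slope, intercept]; the intercept is the fitted
    # value at xc = 0, i.e., at the cutoff.
    b_left = np.polyfit(xc[left], y[left], 1)
    b_right = np.polyfit(xc[right], y[right], 1)
    return b_right[1] - b_left[1]

rng = np.random.default_rng(0)
x = rng.uniform(0, 100, 5000)          # running variable
treated = x >= 50                       # sharp assignment at the cutoff
y = 10 + 0.2 * x + 3.0 * treated + rng.normal(0, 1, x.size)  # true jump = 3
print(rdd_estimate(x, y, cutoff=50, bandwidth=10))  # close to 3
```

Note that the estimate applies only to units near the cutoff, which is exactly the subpopulation-versus-population-of-interest disconnect the abstract describes.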
Peer reviewed
Whitesell, Nancy Rumbaugh; Sarche, Michelle; Keane, Ellen; Mousseau, Alicia C.; Kaufman, Carol E. – American Journal of Evaluation, 2018
Evidence-based interventions hold promise for reducing gaps in health equity across diverse populations, but evidence about effectiveness within these populations lags behind the mainstream, often leaving opportunities to fulfill this promise unrealized. Mismatch between standard intervention outcomes research methods and the cultural and…
Descriptors: Scientific Methodology, Cultural Context, Health Promotion, Intervention
Peer reviewed
Zandniapour, Lily; Deterding, Nicole M. – American Journal of Evaluation, 2018
Tiered evidence initiatives are an important federal strategy to incentivize and accelerate the use of rigorous evidence in planning, implementing, and assessing social service investments. The Social Innovation Fund (SIF), a program of the Corporation for National and Community Service, adopted a public-private partnership approach to tiered…
Descriptors: Program Effectiveness, Program Evaluation, Research Needs, Evidence
Peer reviewed
Keele, Luke – American Journal of Evaluation, 2015
In policy evaluations, interest may focus on why a particular treatment works. One tool for understanding why treatments work is causal mediation analysis. In this essay, I focus on the assumptions needed to estimate mediation effects. I show that there is no "gold standard" method for the identification of causal mediation effects. In…
Descriptors: Mediation Theory, Causal Models, Inferences, Path Analysis
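A common parametric starting point for causal mediation analysis is the product-of-coefficients estimator, which is valid only under strong assumptions such as sequential ignorability; this is precisely the kind of assumption-dependence behind the abstract's point that there is no "gold standard" identification method. The sketch below uses simulated data with illustrative values:

```python
import numpy as np

def mediation_effects(t, m, y):
    """Product-of-coefficients sketch under sequential ignorability.
    Fit m = a0 + a1*t and y = b0 + b1*t + b2*m by least squares;
    indirect (mediated) effect = a1 * b2, direct effect = b1."""
    Xm = np.column_stack([np.ones_like(t), t])
    a = np.linalg.lstsq(Xm, m, rcond=None)[0]
    Xy = np.column_stack([np.ones_like(t), t, m])
    b = np.linalg.lstsq(Xy, y, rcond=None)[0]
    return a[1] * b[2], b[1]  # (indirect, direct)

rng = np.random.default_rng(1)
t = rng.integers(0, 2, 4000).astype(float)        # randomized treatment
m = 0.5 * t + rng.normal(0, 1, t.size)            # true a1 = 0.5
y = 1.0 * t + 0.8 * m + rng.normal(0, 1, t.size)  # true b2 = 0.8, direct = 1.0
indirect, direct = mediation_effects(t, m, y)     # indirect ≈ 0.4
```

If an unobserved confounder affected both the mediator and the outcome, this estimator would be biased even with a randomized treatment, which is why the identifying assumptions deserve the scrutiny the essay gives them.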
Peer reviewed
Louie, Josephine; Rhoads, Christopher; Mark, June – American Journal of Evaluation, 2016
Interest in the regression discontinuity (RD) design as an alternative to randomized controlled trials (RCTs) has grown in recent years. There is little practical guidance, however, on conditions that would lead to a successful RD evaluation or the utility of studies with underpowered RD designs. This article describes the use of RD design to…
Descriptors: Regression (Statistics), Program Evaluation, Algebra, Supplementary Education
Peer reviewed
Kidwell, Kelley M.; Hyde, Luke W. – American Journal of Evaluation, 2016
Heterogeneity between and within people necessitates sequential personalized interventions to optimize individual outcomes. Personalized or adaptive interventions (AIs) are relevant for diseases and maladaptive behavioral trajectories when one intervention is not curative and success of a subsequent intervention may depend on…
Descriptors: Intervention, Individualized Programs, Child Behavior, Behavior Problems
Westlund, Erik; Stuart, Elizabeth A. – American Journal of Evaluation, 2017
This article discusses the nonuse, misuse, and proper use of pilot studies in experimental evaluation research. The authors first show that there is little theoretical, practical, or empirical guidance available to researchers who seek to incorporate pilot studies into experimental evaluation research designs. The authors then discuss how pilot…
Descriptors: Use Studies, Pilot Projects, Evaluation Research, Experiments
Peer reviewed
Solmeyer, Anna R.; Constance, Nicole – American Journal of Evaluation, 2015
Traditionally, evaluation has primarily tried to answer the question "Does a program, service, or policy work?" Recently, more attention has been given to questions about variation in program effects and the mechanisms through which program effects occur. Addressing these kinds of questions requires moving beyond assessing average program…
Descriptors: Program Effectiveness, Program Evaluation, Program Content, Measurement Techniques