Kaitlyn G. Fitzgerald; Elizabeth Tipton – Journal of Educational and Behavioral Statistics, 2025
This article presents methods for using extant data to improve the properties of estimators of the standardized mean difference (SMD) effect size. Because samples recruited into education research studies are often more homogeneous than the populations of policy interest, the variation in educational outcomes can be smaller in these samples than…
Descriptors: Data Use, Computation, Effect Size, Meta Analysis
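For reference, the standardized mean difference in question is conventionally defined as the treatment-control mean difference scaled by a pooled standard deviation; a minimal sketch of the usual two-group estimator (Cohen's d form) is:

```latex
% Standardized mean difference for a two-group design:
%   \bar{Y}_T, \bar{Y}_C : treatment and control sample means
%   S_p                  : pooled within-group standard deviation
\[
  d \;=\; \frac{\bar{Y}_T - \bar{Y}_C}{S_p},
  \qquad
  S_p \;=\; \sqrt{\frac{(n_T - 1)S_T^2 + (n_C - 1)S_C^2}{n_T + n_C - 2}}
\]
```

Presumably, when the recruited sample is more homogeneous than the policy-relevant population, S_p understates the population standard deviation and the resulting d is too large; that is the estimation problem the extant-data methods are aimed at.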
Michael Borenstein – Research Synthesis Methods, 2024
In any meta-analysis, it is critically important to report the dispersion in effects as well as the mean effect. If an intervention has a moderate clinical impact "on average," we also need to know whether the impact is moderate for all relevant populations or whether it varies from trivial in some to major in others. Or indeed, if the…
Descriptors: Meta Analysis, Error Patterns, Statistical Analysis, Intervention
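A common way to report this dispersion alongside the mean effect is an approximate prediction interval built from the between-study variance; a sketch under the usual random-effects assumptions (not necessarily the exact form Borenstein recommends):

```latex
% Approximate 95% prediction interval for the true effect in a new study,
% random-effects meta-analysis of k studies:
%   \hat{\mu}     : estimated mean effect
%   \hat{\tau}^2  : estimated between-study variance
%   SE(\hat{\mu}) : standard error of the mean effect
\[
  \hat{\mu} \;\pm\; t_{k-2,\,0.975}\,
  \sqrt{\hat{\tau}^2 + \widehat{SE}(\hat{\mu})^2}
\]
```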
Kaitlyn G. Fitzgerald; Elizabeth Tipton – Grantee Submission, 2024
This article presents methods for using extant data to improve the properties of estimators of the standardized mean difference (SMD) effect size. Because samples recruited into education research studies are often more homogeneous than the populations of policy interest, the variation in educational outcomes can be smaller in these samples than…
Descriptors: Data Use, Computation, Effect Size, Meta Analysis
Peter M. Steiner; Patrick Sheehan; Vivian C. Wong – Grantee Submission, 2023
Given recent evidence challenging the replicability of results in the social and behavioral sciences, critical questions have been raised about appropriate measures for determining replication success in comparing effect estimates across studies. At issue is the fact that conclusions about replication success often depend on the measure used for…
Descriptors: Replication (Evaluation), Measurement Techniques, Statistical Analysis, Effect Size
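One simple measure of this kind, often used as a baseline when comparing effect estimates across studies, is a z-test of the difference between two independent estimates; a minimal sketch, and not necessarily the measure Steiner et al. favor:

```latex
% Difference test for two independent effect estimates d_1 and d_2
% with standard errors se_1 and se_2:
\[
  z \;=\; \frac{d_1 - d_2}{\sqrt{se_1^2 + se_2^2}}
\]
% |z| > 1.96 is then read as a failure to replicate at the 5% level.
```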
Luke Keele; Matthew Lenard; Lindsay Page – Journal of Research on Educational Effectiveness, 2024
In education settings, treatments are often non-randomly assigned to clusters, such as schools or classrooms, while outcomes are measured for students. This research design is called the clustered observational study (COS). We examine the consequences of common support violations in the COS context. Common support violations occur when the…
Descriptors: Intervention, Cluster Grouping, Observation, Catholic Schools
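As an illustration of what a common support (overlap) check can look like in practice, the sketch below flags clusters whose estimated propensity scores fall outside the region where both treated and control clusters are observed; all names are hypothetical, and this is not the specific diagnostic Keele et al. propose.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def common_support_mask(X, z):
    """Flag clusters inside the region of common support.

    X : (n_clusters, p) array of cluster-level covariates
    z : (n_clusters,) 0/1 treatment indicator assigned at the cluster level
    Returns a boolean mask; False marks clusters outside the overlap region.
    """
    # Estimate cluster-level propensity scores with a simple logistic model.
    ps = LogisticRegression(max_iter=1000).fit(X, z).predict_proba(X)[:, 1]

    # Common support: the range of propensity scores spanned by BOTH groups.
    lo = max(ps[z == 1].min(), ps[z == 0].min())
    hi = min(ps[z == 1].max(), ps[z == 0].max())
    return (ps >= lo) & (ps <= hi)
```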
Terry A. Beehr; Minseo Kim; Ian W. Armstrong – International Journal of Social Research Methodology, 2024
Previous research has extensively studied the reasons for low response rates and ways to avoid them, but it has largely ignored the more fundamental question, which we address, of the degree to which response rates actually matter. Methodological survey research on response rates has been concerned with how to increase responsiveness and with the effects of response rates on…
Descriptors: Surveys, Response Rates (Questionnaires), Effect Size, Research Methodology
Cox, Kyle; Kelcey, Benjamin – American Journal of Evaluation, 2023
Analysis of differential treatment effects across targeted subgroups and contexts is a critical objective in many evaluations because it delineates for whom and under what conditions particular programs, therapies, or treatments are effective. Unfortunately, it is unclear how to plan efficient and effective evaluations that include these…
Descriptors: Statistical Analysis, Research Design, Cluster Grouping, Sample Size
Bulus, Metin – Journal of Research on Educational Effectiveness, 2022
Although Cattaneo et al. (2019) provided a data-driven framework for power computations for Regression Discontinuity Designs in line with rdrobust Stata and R commands, which allows higher-order functional forms for the score variable when using the non-parametric local polynomial estimation, analogous advancements in their parametric estimation…
Descriptors: Effect Size, Computation, Regression (Statistics), Statistical Analysis
Joo, Seang-Hwane; Wang, Yan; Ferron, John; Beretvas, S. Natasha; Moeyaert, Mariola; Van Den Noortgate, Wim – Journal of Educational and Behavioral Statistics, 2022
Multiple baseline (MB) designs are becoming more prevalent in educational and behavioral research, and as they do, there is growing interest in combining effect size estimates across studies. To further refine the meta-analytic methods of estimating the effect, this study developed and compared eight alternative methods of estimating intervention…
Descriptors: Meta Analysis, Effect Size, Computation, Statistical Analysis
Karun Adusumilli; Francesco Agostinelli; Emilio Borghesan – National Bureau of Economic Research, 2024
This paper examines the scalability of the results from the Tennessee Student-Teacher Achievement Ratio (STAR) Project, a prominent educational experiment. We explore how the misalignment between the experimental design and the econometric model affects researchers' ability to learn about the intervention's scalability. We document heterogeneity…
Descriptors: Class Size, Research Design, Educational Research, Program Effectiveness
Nianbo Dong; Benjamin Kelcey; Jessaca Spybrook; Yanli Xie; Dung Pham; Peilin Qiu; Ning Sui – Grantee Submission, 2024
Multisite trials that randomize individuals (e.g., students) within sites (e.g., schools) or clusters (e.g., teachers/classrooms) within sites (e.g., schools) are commonly used for program evaluation because they provide opportunities to learn about treatment effects as well as their heterogeneity across sites and subgroups (defined by moderating…
Descriptors: Statistical Analysis, Randomized Controlled Trials, Educational Research, Effect Size
Justin Boutilier; Jonas Jonasson; Hannah Li; Erez Yoeli – Society for Research on Educational Effectiveness, 2024
Background: Randomized controlled trials (RCTs), or experiments, are the gold standard for intervention evaluation. However, the main appeal of RCTs--the clean identification of causal effects--can be compromised by interference, which occurs when one subject's actions influence another subject's behavior or outcomes. In this paper, we formalize and study…
Descriptors: Randomized Controlled Trials, Intervention, Mathematical Models, Interference (Learning)
Menglin Xu; Jessica A. R. Logan – Educational and Psychological Measurement, 2024
Research designs that include planned missing data are gaining popularity in applied education research. These methods have traditionally relied on introducing missingness into data collections using the missing completely at random (MCAR) mechanism. This study assesses whether planned missingness can also be implemented when data are instead…
Descriptors: Research Design, Research Methodology, Monte Carlo Methods, Statistical Analysis
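For concreteness, planned missingness under MCAR simply deletes responses with a probability that does not depend on the data; a minimal, hypothetical sketch of imposing it on simulated item scores (the study itself contrasts MCAR with other missingness mechanisms):

```python
import numpy as np

rng = np.random.default_rng(42)

def impose_mcar(data, miss_rate=0.3):
    """Return a copy of `data` with entries set to NaN completely at random."""
    out = data.astype(float).copy()
    mask = rng.random(out.shape) < miss_rate   # missingness ignores the values
    out[mask] = np.nan
    return out

# Example: delete 25% of responses from simulated scores on 12 items.
scores = rng.normal(loc=0.0, scale=1.0, size=(500, 12))
scores_mcar = impose_mcar(scores, miss_rate=0.25)
```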
McKay, Brad; Corson, Abbey; Vinh, Mary-Anne; Jeyarajan, Gianna; Tandon, Chitrini; Brooks, Hugh; Hubley, Julie; Carter, Michael J. – Journal of Motor Learning and Development, 2023
A priori power analyses can ensure studies are unlikely to miss interesting effects. Recent metascience has suggested that kinesiology research may be underpowered and selectively reported. Here, we examined whether power analyses are being used to ensure informative studies in motor behavior. We reviewed every article published in three motor…
Descriptors: Incidence, Statistical Analysis, Psychomotor Skills, Motor Development
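A typical a priori power analysis of the kind this review looks for can be run in a few lines; the sketch below uses statsmodels to solve for the per-group sample size needed to detect a medium standardized effect in a two-group comparison (the numbers are illustrative, not taken from the reviewed articles).

```python
from statsmodels.stats.power import TTestIndPower

# Solve for the per-group n needed to detect d = 0.5 with 80% power
# in a two-sided independent-samples t test at alpha = .05.
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80,
                                   ratio=1.0, alternative='two-sided')
print(f"Required sample size per group: {n_per_group:.1f}")  # roughly 64
```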
Weese, James D.; Turner, Ronna C.; Liang, Xinya; Ames, Allison; Crawford, Brandon – Educational and Psychological Measurement, 2023
A study was conducted to implement the use of a standardized effect size and corresponding classification guidelines for polytomous data with the POLYSIBTEST procedure and compare those guidelines with prior recommendations. Two simulation studies were included. The first identifies new unstandardized test heuristics for classifying moderate and…
Descriptors: Effect Size, Classification, Guidelines, Statistical Analysis