Clintin P. Davis-Stober; Jason Dana; David Kellen; Sara D. McMullin; Wes Bonifay – Grantee Submission, 2023
Conducting research with human subjects can be difficult because of limited sample sizes and small empirical effects. We demonstrate that this problem can yield patterns of results that are practically indistinguishable from flipping a coin to determine the direction of treatment effects. We use this idea of random conclusions to establish a…
Descriptors: Research Methodology, Sample Size, Effect Size, Hypothesis Testing
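A minimal simulation sketch of the coin-flip comparison in the abstract above (an illustration of the general point, not the authors' formal framework): draw many replications of a two-group study with a small per-group sample and a small true standardized effect, and check how often the estimated effect even points in the right direction.

```python
# Illustration only: with small n and a small true effect, the sign of the
# estimated treatment effect is barely more predictable than a coin flip.
import numpy as np

rng = np.random.default_rng(0)

def prob_correct_sign(n_per_group=20, true_d=0.1, reps=10_000):
    """Share of replications in which the estimated mean difference has the
    same sign as the (positive) true standardized effect."""
    treat = rng.normal(true_d, 1.0, size=(reps, n_per_group))
    control = rng.normal(0.0, 1.0, size=(reps, n_per_group))
    diffs = treat.mean(axis=1) - control.mean(axis=1)
    return (diffs > 0).mean()

print(prob_correct_sign(true_d=0.10))  # roughly 0.62: better than a coin flip, but not by much
print(prob_correct_sign(true_d=0.02))  # close to 0.50: effectively a coin flip
```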
Jason C. Chow; Jennifer R. Ledford; Sienna Windsor; Paige Bennett – Exceptional Children, 2023
The purpose of this study is to present a set of empirically derived effect size distributions in order to provide field-based benchmarks for assessing the relative effects of interventions aimed at reducing challenging behavior or increasing engagement for young children with and without disabilities. We synthesized 192 single-case designs that…
Descriptors: Behavior Problems, Intervention, Prediction, Learner Engagement
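The field-based benchmarks described above amount to reading quantiles off a pooled distribution of prior effect sizes. A hedged sketch of that general idea, using placeholder values rather than the study's synthesized single-case data:

```python
# Sketch of empirical effect-size benchmarking: pool effect sizes from prior
# studies in a field and treat quantiles of that distribution as reference
# points for judging new interventions. The pool below is a placeholder.
import numpy as np

def empirical_benchmarks(effect_sizes, quantiles=(0.25, 0.50, 0.75)):
    """Quantiles of a pool of prior effect sizes, used as field-based reference points."""
    return dict(zip(quantiles, np.quantile(effect_sizes, quantiles)))

# Placeholder pool of prior effect sizes, for demonstration only.
rng = np.random.default_rng(1)
demo_pool = rng.gamma(shape=2.0, scale=0.3, size=200)

for q, value in empirical_benchmarks(demo_pool).items():
    print(f"{int(q * 100)}th percentile benchmark: {value:.2f}")
```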
Kraft, Matthew A. – Annenberg Institute for School Reform at Brown University, 2019
Researchers commonly interpret effect sizes by applying benchmarks proposed by Cohen over a half century ago. However, effects that are small by Cohen's standards are large relative to the impacts of most field-based interventions. These benchmarks also fail to consider important differences in study features, program costs, and scalability. In…
Descriptors: Data Interpretation, Effect Size, Intervention, Benchmarking
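For context, the quantity to which Cohen's small/medium/large labels are usually applied is Cohen's d, the standardized mean difference. A minimal sketch of the textbook computation (not Kraft's proposed benchmarking scheme), with made-up scores:

```python
# Cohen's d for two independent groups: mean difference divided by the
# pooled standard deviation. Kraft's argument is that an effect labeled
# "small" on this scale can still be large relative to typical field-based
# education interventions.
import numpy as np

def cohens_d(treatment, control):
    """Standardized mean difference: mean gap divided by the pooled standard deviation."""
    t = np.asarray(treatment, dtype=float)
    c = np.asarray(control, dtype=float)
    pooled_var = ((len(t) - 1) * t.var(ddof=1) + (len(c) - 1) * c.var(ddof=1)) / (len(t) + len(c) - 2)
    return (t.mean() - c.mean()) / np.sqrt(pooled_var)

# Hypothetical scores, for illustration only.
print(cohens_d([12, 15, 14, 16, 13], [11, 13, 12, 14, 12]))  # roughly 1.16 with these made-up scores
```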
Kern, Holger L.; Stuart, Elizabeth A.; Hill, Jennifer; Green, Donald P. – Journal of Research on Educational Effectiveness, 2016
Randomized experiments are considered the gold standard for causal inference because they can provide unbiased estimates of treatment effects for the experimental participants. However, researchers and policymakers are often interested in using a specific experiment to inform decisions about other target populations. In education research,…
Descriptors: Educational Research, Generalization, Sampling, Participant Characteristics
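One simple way to see the generalization problem described above is post-stratification: subgroup effects estimated in the experiment are re-averaged under the target population's subgroup shares. The sketch below uses hypothetical subgroups and numbers and is not necessarily one of the specific estimators examined in the article.

```python
# Reweight subgroup treatment effects from an experimental sample to a
# target population whose composition differs from the sample's.
def poststratified_effect(subgroup_effects, shares):
    """Average subgroup treatment effects using a given set of subgroup shares."""
    assert abs(sum(shares.values()) - 1.0) < 1e-9
    return sum(subgroup_effects[g] * shares[g] for g in shares)

# Hypothetical subgroups and numbers, for illustration only.
effects = {"urban": 0.30, "rural": 0.10}
experiment_shares = {"urban": 0.8, "rural": 0.2}   # composition of the experimental sample
population_shares = {"urban": 0.5, "rural": 0.5}   # composition of the target population

print(poststratified_effect(effects, experiment_shares))  # about 0.26: what the experiment alone suggests
print(poststratified_effect(effects, population_shares))  # about 0.20: reweighted to the target population
```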
McNeish, Daniel – Review of Educational Research, 2017
In education research, small samples are common because of financial limitations, logistical challenges, or exploratory studies. With small samples, statistical principles on which researchers rely do not hold, leading to trust issues with model estimates and possible replication issues when scaling up. Researchers are generally aware of such…
Descriptors: Models, Statistical Analysis, Sampling, Sample Size
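A minimal illustration of the kind of small-sample breakdown at issue (my example, not the article's): the maximum-likelihood variance estimator is noticeably biased when n is small, even though the bias vanishes as samples grow.

```python
# The ML variance estimator divides by n, so its expectation is
# (n - 1) / n times the true variance: a 20% understatement at n = 5,
# a negligible one at n = 500.
import numpy as np

rng = np.random.default_rng(2)

def mean_ml_variance(n, true_var=1.0, reps=20_000):
    """Average of the ML variance estimate (dividing by n) across many samples of size n."""
    samples = rng.normal(0.0, np.sqrt(true_var), size=(reps, n))
    return samples.var(axis=1, ddof=0).mean()

print(mean_ml_variance(n=5))    # about 0.80: 20% below the true variance of 1.0
print(mean_ml_variance(n=500))  # about 1.00: the small-sample bias has vanished
```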
DeBarger, Angela Haydel; Penuel, William R.; Harris, Christopher J.; Kennedy, Cathleen A. – American Journal of Evaluation, 2016
Evaluators must employ research designs that generate compelling evidence related to the worth or value of programs, in which assessment data often play a critical role. This article focuses on assessment design in the context of evaluation. It describes the process of using the Framework for K-12 Science Education and Next Generation Science…
Descriptors: Intervention, Program Evaluation, Research Design, Science Tests