Clintin P. Davis-Stober; Jason Dana; David Kellen; Sara D. McMullin; Wes Bonifay – Grantee Submission, 2023
Conducting research with human subjects can be difficult because of limited sample sizes and small empirical effects. We demonstrate that this problem can yield patterns of results that are practically indistinguishable from flipping a coin to determine the direction of treatment effects. We use this idea of random conclusions to establish a…
Descriptors: Research Methodology, Sample Size, Effect Size, Hypothesis Testing
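As a hedged illustration of the point in the Davis-Stober et al. abstract above, the Python sketch below simulates many small two-group studies with a small true effect and reports how often the estimated effect has the correct sign; values near chance behave like the coin flip the authors describe. The sample size (20 per arm), true effect (d = 0.1), and simulation count are illustrative assumptions, not figures from the paper.

```python
# Illustrative simulation (not from the cited paper): with small samples and a
# small true effect, the sign of the estimated treatment effect is close to chance.
import numpy as np

rng = np.random.default_rng(0)
n_per_group = 20      # assumed sample size per arm
true_d = 0.1          # assumed small standardized effect
n_studies = 10_000    # number of simulated studies

treat = rng.normal(true_d, 1.0, size=(n_studies, n_per_group))
control = rng.normal(0.0, 1.0, size=(n_studies, n_per_group))
est_effect = treat.mean(axis=1) - control.mean(axis=1)

correct_sign = np.mean(est_effect > 0)
print(f"Proportion of studies with correctly signed effect: {correct_sign:.2f}")
# Typically prints a value around 0.6, i.e., not far from flipping a coin.
```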
Jason C. Chow; Jennifer R. Ledford; Sienna Windsor; Paige Bennett – Exceptional Children, 2023
The purpose of this study is to present a set of empirically derived effect size distributions in order to provide field-based benchmarks for assessing the relative effects of interventions aimed at reducing challenging behavior or increasing engagement for young children with and without disabilities. We synthesized 192 single-case designs that…
Descriptors: Behavior Problems, Intervention, Prediction, Learner Engagement
Kraft, Matthew A. – Annenberg Institute for School Reform at Brown University, 2019
Researchers commonly interpret effect sizes by applying benchmarks proposed by Cohen over a half century ago. However, effects that are small by Cohen's standards are large relative to the impacts of most field-based interventions. These benchmarks also fail to consider important differences in study features, program costs, and scalability. In…
Descriptors: Data Interpretation, Effect Size, Intervention, Benchmarking
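For context on the benchmarks the Kraft abstract refers to, the snippet below computes Cohen's d (pooled-standard-deviation standardized mean difference) for two invented groups and labels it with Cohen's conventional 0.2/0.5/0.8 thresholds. The data are made up for illustration, and the thresholds shown are Cohen's conventions, not the field-based benchmarks Kraft proposes.

```python
# Minimal Cohen's d computation with invented data; the small/medium/large labels
# are Cohen's conventional thresholds, which Kraft argues can mislead when applied
# to field-based education interventions.
import numpy as np

def cohens_d(treatment, control):
    """Standardized mean difference using the pooled standard deviation."""
    t, c = np.asarray(treatment, float), np.asarray(control, float)
    pooled_var = ((len(t) - 1) * t.var(ddof=1) + (len(c) - 1) * c.var(ddof=1)) / (len(t) + len(c) - 2)
    return (t.mean() - c.mean()) / np.sqrt(pooled_var)

d = cohens_d([72, 75, 71, 78, 74], [70, 73, 69, 72, 71])
label = ("large" if abs(d) >= 0.8 else
         "medium" if abs(d) >= 0.5 else
         "small" if abs(d) >= 0.2 else "negligible")
print(f"d = {d:.2f} ({label} by Cohen's conventions)")
```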
Kern, Holger L.; Stuart, Elizabeth A.; Hill, Jennifer; Green, Donald P. – Journal of Research on Educational Effectiveness, 2016
Randomized experiments are considered the gold standard for causal inference because they can provide unbiased estimates of treatment effects for the experimental participants. However, researchers and policymakers are often interested in using a specific experiment to inform decisions about other target populations. In education research,…
Descriptors: Educational Research, Generalization, Sampling, Participant Characteristics
Gierut, Judith A.; Morrisette, Michele L.; Dickinson, Stephanie L. – Journal of Speech, Language, and Hearing Research, 2015
Purpose: The purpose of this study was to document, validate, and corroborate effect size (ES) for single-subject design in treatment of children with functional phonological disorders; to evaluate potential child-specific contributing variables relative to ES; and to establish benchmarks for interpretation of ES for the population. Method: Data…
Descriptors: Effect Size, Research Design, Phonology, Speech Therapy
McNeish, Daniel – Review of Educational Research, 2017
In education research, small samples are common because of financial limitations, logistical challenges, or exploratory studies. With small samples, statistical principles on which researchers rely do not hold, leading to trust issues with model estimates and possible replication issues when scaling up. Researchers are generally aware of such…
Descriptors: Models, Statistical Analysis, Sampling, Sample Size
Tang, Yang; Cook, Thomas D.; Kisbu-Sakarya, Yasemin – Society for Research on Educational Effectiveness, 2015
Regression discontinuity design (RD) has been widely used to produce reliable causal estimates. Researchers have validated the accuracy of RD design using within-study comparisons (Cook, Shadish & Wong, 2008; Cook & Steiner, 2010; Shadish et al., 2011). Within-study comparisons examine the validity of a quasi-experiment by comparing its…
Descriptors: Pretests Posttests, Statistical Bias, Accuracy, Regression (Statistics)
Moeller, Jeremy D.; Dattilo, John; Rusch, Frank – Psychology in the Schools, 2015
This study examined how specific guidelines and heuristics have been used to identify methodological rigor associated with single-case research designs based on quality indicators developed by Horner et al. Specifically, this article describes how literature reviews have applied Horner et al.'s quality indicators and evidence-based criteria…
Descriptors: Research Design, Special Education, Literature Reviews, Educational Indicators
DeBarger, Angela Haydel; Penuel, William R.; Harris, Christopher J.; Kennedy, Cathleen A. – American Journal of Evaluation, 2016
Evaluators must employ research designs that generate compelling evidence related to the worth or value of programs, of which assessment data often play a critical role. This article focuses on assessment design in the context of evaluation. It describes the process of using the Framework for K-12 Science Education and Next Generation Science…
Descriptors: Intervention, Program Evaluation, Research Design, Science Tests
Magrath, Bronwen – Comparative Education Review, 2015
This article explores transnational activism within Education for All (EFA), looking specifically at the strategic use of information and research by transnational advocacy organizations. Through a comparative case-study examination of two prominent civil society organizations within the EFA movement--the Asia South Pacific Association for Basic…
Descriptors: Access to Education, Politics of Education, Activism, Advocacy
Mueller, Christoph Emanuel; Gaus, Hansjoerg – American Journal of Evaluation, 2015
In this article, we test an alternative approach to creating a counterfactual basis for estimating individual and average treatment effects. Instead of using control/comparison groups or before-measures, the so-called Counterfactual as Self-Estimated by Program Participants (CSEPP) relies on program participants' self-estimations of their own…
Descriptors: Intervention, Research Design, Research Methodology, Program Evaluation
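One plausible reading of the CSEPP idea described above is that each participant's treatment effect is the difference between the observed post-program outcome and the outcome the participant estimates they would have had without the program. The sketch below implements that simple-difference reading with invented numbers; the variable names and the estimator itself are assumptions for illustration, not necessarily the exact procedure in Mueller and Gaus.

```python
# Hypothetical sketch of a counterfactual-as-self-estimated calculation:
# each participant reports an observed outcome and a self-estimated
# counterfactual outcome ("what my score would have been without the program").
observed = [68, 74, 59, 81, 70]                        # observed outcomes (invented)
self_estimated_counterfactual = [60, 71, 57, 73, 66]   # participants' own estimates (invented)

individual_effects = [obs - cf for obs, cf in zip(observed, self_estimated_counterfactual)]
average_effect = sum(individual_effects) / len(individual_effects)

print("Individual effects:", individual_effects)        # [8, 3, 2, 8, 4]
print("Average treatment effect estimate:", average_effect)  # 5.0
```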
Fallon, Lindsay M.; Collier-Meek, Melissa A.; Maggin, Daniel M.; Sanetti, Lisa M. H.; Johnson, Austin H. – Exceptional Children, 2015
Optimal levels of treatment fidelity, a critical moderator of intervention effectiveness, are often difficult to sustain in applied settings. It is unknown whether performance feedback, a widely researched method for increasing educators' treatment fidelity, is an evidence-based practice. The purpose of this review was to evaluate the current…
Descriptors: Feedback (Response), Evidence, Literature Reviews, Intervention
Wing, Coady; Cook, Thomas D. – Journal of Policy Analysis and Management, 2013
The sharp regression discontinuity design (RDD) has three key weaknesses compared to the randomized clinical trial (RCT). It has lower statistical power, it is more dependent on statistical modeling assumptions, and its treatment effect estimates are limited to the narrow subpopulation of cases immediately around the cutoff, which is rarely of…
Descriptors: Regression (Statistics), Research Design, Statistical Analysis, Research Problems
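As background for the two RDD entries above (Tang, Cook, and Kisbu-Sakarya; Wing and Cook), the sketch below shows a generic sharp regression discontinuity estimate: treatment is assigned when a running variable crosses a cutoff, and the effect is the jump in the fitted outcome at that cutoff, estimated here from separate linear fits within an assumed bandwidth on each side. All parameters (cutoff, bandwidth, simulated jump of 2.0) are illustrative assumptions, not values from either paper; note how only observations near the cutoff enter the estimate, which is the narrow-subpopulation limitation Wing and Cook describe.

```python
# Generic sharp RDD sketch (illustrative; not either cited paper's analysis).
# Effect = jump in the regression function at the cutoff, from separate
# linear fits on each side within a chosen bandwidth.
import numpy as np

rng = np.random.default_rng(1)
cutoff, bandwidth, true_jump = 0.0, 0.5, 2.0      # assumed values

running = rng.uniform(-1, 1, 2_000)               # running (assignment) variable
treated = running >= cutoff                       # sharp assignment rule
outcome = 1.5 * running + true_jump * treated + rng.normal(0, 1, running.size)

def fit_at_cutoff(x, y):
    """Linear fit on one side of the cutoff, evaluated at the cutoff."""
    slope, intercept = np.polyfit(x, y, 1)
    return slope * cutoff + intercept

left = (running < cutoff) & (running >= cutoff - bandwidth)
right = (running >= cutoff) & (running <= cutoff + bandwidth)
rdd_effect = fit_at_cutoff(running[right], outcome[right]) - fit_at_cutoff(running[left], outcome[left])
print(f"Estimated effect at the cutoff: {rdd_effect:.2f}  (true jump: {true_jump})")
```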
Losinski, Mickey; Maag, John W.; Katsiyannis, Antonis; Ennis, Robin Parks – Exceptional Children, 2014
Interventions based on the results of functional behavioral assessment (FBA) have been the topic of extensive research and, in certain cases, mandated for students with disabilities under the Individuals With Disabilities Education Act. There exists a wide variety of methods for conducting such assessments, with little consensus in the field. The…
Descriptors: Intervention, Predictor Variables, Program Effectiveness, Educational Quality
McLaughlin, Gerald; Howard, Richard; McLaughlin, Josetta – Association for Institutional Research (NJ1), 2011
Institutional performance benchmarking requires identifying a set of reference or comparator institutions. This paper describes a method by which an institution can identify other institutions that are most similar to itself using a methodology that identifies the nearest institutional neighbors based on a balanced set of metrics accessed from…
Descriptors: Higher Education, Institutional Characteristics, Institutional Research, Institutional Evaluation
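The McLaughlin, Howard, and McLaughlin entry describes identifying an institution's nearest comparator institutions on a balanced set of metrics. A minimal version of that idea is sketched below using standardized metrics and Euclidean distance; the institutions, metrics, and distance choice are invented for illustration and are not the paper's specific methodology.

```python
# Minimal nearest-neighbor comparator search on standardized metrics
# (illustrative; the institutions and metric values are invented).
import numpy as np

institutions = ["Our U", "College A", "College B", "College C"]
# Columns: enrollment (thousands), % Pell, 6-year grad rate, research $ (millions)
metrics = np.array([
    [18.0, 32.0, 71.0, 120.0],   # Our U (the focal institution)
    [17.0, 35.0, 68.0, 105.0],
    [40.0, 20.0, 85.0, 450.0],
    [ 6.0, 55.0, 52.0,   5.0],
])

# Standardize each metric so no single scale dominates the distance.
z = (metrics - metrics.mean(axis=0)) / metrics.std(axis=0)
distances = np.linalg.norm(z[1:] - z[0], axis=1)   # distance from the focal institution

for rank, idx in enumerate(np.argsort(distances), start=1):
    print(f"{rank}. {institutions[idx + 1]}  (distance {distances[idx]:.2f})")
```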