Showing all 15 results
Jason C. Chow; Jennifer R. Ledford; Sienna Windsor; Paige Bennett – Exceptional Children, 2023
The purpose of this study is to present a set of empirically derived effect size distributions to provide field-based benchmarks for assessing the relative effects of interventions aimed at reducing challenging behavior or increasing engagement for young children with and without disabilities. We synthesized 192 single-case designs that…
Descriptors: Behavior Problems, Intervention, Prediction, Learner Engagement
Peer reviewed
Direct link
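To make the kind of effect size behind such benchmarks concrete, here is a minimal sketch of Nonoverlap of All Pairs (NAP), one common single-case metric; the study synthesizes multiple metrics, and the data below are invented purely for illustration.

```python
# Sketch: Nonoverlap of All Pairs (NAP), one common single-case effect size.
# Data are hypothetical engagement counts; NAP is the share of (baseline,
# intervention) pairs in which the intervention point is higher, with ties
# counted as half.

from itertools import product

def nap(baseline, intervention):
    pairs = list(product(baseline, intervention))
    wins = sum(1.0 for a, b in pairs if b > a)
    ties = sum(0.5 for a, b in pairs if b == a)
    return (wins + ties) / len(pairs)

baseline_phase = [2, 3, 2, 4, 3]      # hypothetical A-phase data
intervention_phase = [5, 6, 4, 7, 6]  # hypothetical B-phase data
print(f"NAP = {nap(baseline_phase, intervention_phase):.2f}")  # 0.98
```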
Kern, Holger L.; Stuart, Elizabeth A.; Hill, Jennifer; Green, Donald P. – Journal of Research on Educational Effectiveness, 2016
Randomized experiments are considered the gold standard for causal inference because they can provide unbiased estimates of treatment effects for the experimental participants. However, researchers and policymakers are often interested in using a specific experiment to inform decisions about other target populations. In education research,…
Descriptors: Educational Research, Generalization, Sampling, Participant Characteristics
Peer reviewed
Direct link
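The core move in generalizing from an experiment to a target population can be shown with a toy reweighting example: stratum-specific effects estimated in the sample are reaveraged under the target population's covariate distribution. The strata, shares, and effects below are invented; the article itself assesses more formal statistical approaches.

```python
# Sketch: reweighting an experimental sample toward a target population on one
# observed covariate, a simplified version of the poststratification idea used
# in generalization work. All numbers are hypothetical.

# Stratum shares of a covariate (e.g., school poverty level: low/high)
sample_share = {"low": 0.7, "high": 0.3}   # experimental sample
target_share = {"low": 0.4, "high": 0.6}   # target population

# Stratum-specific treatment effects estimated in the experiment
stratum_effect = {"low": 0.10, "high": 0.25}

# Unweighted sample estimate vs. estimate reweighted to the target population
sample_ate = sum(sample_share[s] * stratum_effect[s] for s in sample_share)
target_ate = sum(target_share[s] * stratum_effect[s] for s in target_share)
print(f"sample ATE  = {sample_ate:.3f}")   # 0.145
print(f"target PATE = {target_ate:.3f}")   # 0.190
```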
Gierut, Judith A.; Morrisette, Michele L.; Dickinson, Stephanie L. – Journal of Speech, Language, and Hearing Research, 2015
Purpose: The purpose of this study was to document, validate, and corroborate effect size (ES) for single-subject design in treatment of children with functional phonological disorders; to evaluate potential child-specific contributing variables relative to ES; and to establish benchmarks for interpretation of ES for the population. Method: Data…
Descriptors: Effect Size, Research Design, Phonology, Speech Therapy
Peer reviewed
Direct link
McNeish, Daniel – Review of Educational Research, 2017
In education research, small samples are common because of financial limitations, logistical challenges, or exploratory studies. With small samples, statistical principles on which researchers rely do not hold, leading to trust issues with model estimates and possible replication issues when scaling up. Researchers are generally aware of such…
Descriptors: Models, Statistical Analysis, Sampling, Sample Size
Peer reviewed
PDF on ERIC
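One self-contained example of the kind of small-sample problem at issue: the ordinary standardized mean difference is biased upward when samples are small, and Hedges' correction factor shrinks it. A sketch with invented data; this particular correction is an illustration of the general phenomenon, not necessarily one the review covers.

```python
# Sketch: small-sample bias correction for a standardized mean difference.
# Cohen's d overestimates the population effect in small samples; Hedges' g
# applies the correction factor J = 1 - 3 / (4*df - 1). Data are invented.

import statistics

def hedges_g(group1, group2):
    n1, n2 = len(group1), len(group2)
    s1, s2 = statistics.variance(group1), statistics.variance(group2)
    # Pooled standard deviation
    sp = (((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2)) ** 0.5
    d = (statistics.mean(group1) - statistics.mean(group2)) / sp
    df = n1 + n2 - 2
    j = 1 - 3 / (4 * df - 1)  # small-sample correction factor
    return d, j * d

treated = [12, 15, 11, 14]  # n = 4, hypothetical scores
control = [10, 11, 9, 12]
d, g = hedges_g(treated, control)
print(f"d = {d:.3f}, g = {g:.3f}")  # g is noticeably smaller at n = 4
```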
Tang, Yang; Cook, Thomas D.; Kisbu-Sakarya, Yasemin – Society for Research on Educational Effectiveness, 2015
Regression discontinuity (RD) designs have been widely used to produce reliable causal estimates. Researchers have validated the accuracy of RD designs using within-study comparisons (Cook, Shadish, & Wong, 2008; Cook & Steiner, 2010; Shadish et al., 2011). Within-study comparisons examine the validity of a quasi-experiment by comparing its…
Descriptors: Pretests Posttests, Statistical Bias, Accuracy, Regression (Statistics)
Peer reviewed
Direct link
Moeller, Jeremy D.; Dattilo, John; Rusch, Frank – Psychology in the Schools, 2015
This study examined how specific guidelines and heuristics have been used to identify methodological rigor associated with single-case research designs based on quality indicators developed by Horner et al. Specifically, this article describes how literature reviews have applied Horner et al.'s quality indicators and evidence-based criteria.…
Descriptors: Research Design, Special Education, Literature Reviews, Educational Indicators
Peer reviewed
Direct link
DeBarger, Angela Haydel; Penuel, William R.; Harris, Christopher J.; Kennedy, Cathleen A. – American Journal of Evaluation, 2016
Evaluators must employ research designs that generate compelling evidence related to the worth or value of programs, in which assessment data often play a critical role. This article focuses on assessment design in the context of evaluation. It describes the process of using the Framework for K-12 Science Education and Next Generation Science…
Descriptors: Intervention, Program Evaluation, Research Design, Science Tests
Peer reviewed
Direct link
Magrath, Bronwen – Comparative Education Review, 2015
This article explores transnational activism within Education for All (EFA), looking specifically at the strategic use of information and research by transnational advocacy organizations. Through a comparative case-study examination of two prominent civil society organizations within the EFA movement--the Asia South Pacific Association for Basic…
Descriptors: Access to Education, Politics of Education, Activism, Advocacy
Peer reviewed
Direct link
Mueller, Christoph Emanuel; Gaus, Hansjoerg – American Journal of Evaluation, 2015
In this article, we test an alternative approach to creating a counterfactual basis for estimating individual and average treatment effects. Instead of using control/comparison groups or before-measures, the so-called Counterfactual as Self-Estimated by Program Participants (CSEPP) relies on program participants' self-estimations of their own…
Descriptors: Intervention, Research Design, Research Methodology, Program Evaluation
Peer reviewed
Direct link
Fallon, Lindsay M.; Collier-Meek, Melissa A.; Maggin, Daniel M.; Sanetti, Lisa M. H.; Johnson, Austin H. – Exceptional Children, 2015
Optimal levels of treatment fidelity, a critical moderator of intervention effectiveness, are often difficult to sustain in applied settings. It is unknown whether performance feedback, a widely researched method for increasing educators' treatment fidelity, is an evidence-based practice. The purpose of this review was to evaluate the current…
Descriptors: Feedback (Response), Evidence, Literature Reviews, Intervention
Peer reviewed
Direct link
Wing, Coady; Cook, Thomas D. – Journal of Policy Analysis and Management, 2013
The sharp regression discontinuity design (RDD) has three key weaknesses compared to the randomized clinical trial (RCT). It has lower statistical power, it is more dependent on statistical modeling assumptions, and its treatment effect estimates are limited to the narrow subpopulation of cases immediately around the cutoff, which is rarely of…
Descriptors: Regression (Statistics), Research Design, Statistical Analysis, Research Problems
Peer reviewed
Direct link
Losinski, Mickey; Maag, John W.; Katsiyannis, Antonis; Ennis, Robin Parks – Exceptional Children, 2014
Interventions based on the results of functional behavioral assessment (FBA) have been the topic of extensive research and, in certain cases, mandated for students with disabilities under the Individuals With Disabilities Education Act. A wide variety of methods exists for conducting such assessments, with little consensus in the field. The…
Descriptors: Intervention, Predictor Variables, Program Effectiveness, Educational Quality
Peer reviewed
PDF on ERIC
Phelps, Geoffrey; Jones, Nathan; Kelcey, Ben; Liu, Shuangshuang; Kisa, Zahid – Society for Research on Educational Effectiveness, 2013
Growing interest in teaching quality and accountability has focused attention on the need for rigorous studies and evaluations of professional development (PD) programs. However, the study of PD has been hampered by a lack of suitable instruments. The authors present data from the Teacher Knowledge Assessment System (TKAS), which was designed to…
Descriptors: Benchmarking, Knowledge Base for Teaching, Effect Size, Professional Development
Peer reviewed
PDF on ERIC
National Center for Education Statistics, 2013
The 2011 NAEP-TIMSS linking study conducted by the National Center for Education Statistics (NCES) was designed to predict Trends in International Mathematics and Science Study (TIMSS) scores for the U.S. states that participated in the 2011 National Assessment of Educational Progress (NAEP) mathematics and science assessments of eighth-grade students.…
Descriptors: Grade 8, Research Methodology, Research Design, Trend Analysis
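The basic linking idea can be shown in miniature: units observed on both scales define a projection from one scale to the other, which then predicts scores for units observed on only one. The study's actual methodology is considerably more elaborate; all numbers below are invented.

```python
# Sketch: score linking by simple projection. States with both NAEP and TIMSS
# results define a line from one scale to the other; the line then predicts
# TIMSS scores for states with NAEP results only. All numbers are invented.

import numpy as np

naep_both = np.array([280, 285, 290, 295, 300])   # states with both scores
timss_both = np.array([500, 512, 521, 534, 545])

slope, intercept = np.polyfit(naep_both, timss_both, 1)

naep_only = np.array([275, 305])                  # states with NAEP only
predicted_timss = slope * naep_only + intercept
print(predicted_timss.round(1))                   # projected TIMSS scores
```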
Yao, S. Bing; Hevner, Alan R. – 1984
Benchmarking is one of several alternative methods of performance evaluation, a key aspect in the selection of database systems. The purpose of this report is to provide a performance evaluation methodology, or benchmarking framework, to assist in the design and implementation of a wide variety of benchmark experiments. The methodology,…
Descriptors: Benchmarking, Database Management Systems, Databases, Evaluation Criteria
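A present-day miniature of the kind of benchmark experiment such a framework structures: run a fixed workload repeatedly against a database and summarize latencies. Here sqlite3 merely stands in for the system under test, and the schema and query are invented.

```python
# Sketch: a tiny database benchmark harness. Populate a table, run the same
# query repeatedly, and summarize latencies. sqlite3 is a stand-in for the
# system under test; table and query are invented.

import sqlite3
import statistics
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")
conn.executemany(
    "INSERT INTO orders (amount) VALUES (?)",
    [(i % 100 + 0.5,) for i in range(100_000)],
)
conn.commit()

latencies = []
for _ in range(50):  # repeated trials of one fixed workload
    start = time.perf_counter()
    conn.execute(
        "SELECT COUNT(*), AVG(amount) FROM orders WHERE amount > 50"
    ).fetchone()
    latencies.append(time.perf_counter() - start)

print(f"mean {statistics.mean(latencies) * 1e3:.2f} ms, "
      f"stdev {statistics.stdev(latencies) * 1e3:.2f} ms")
```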