Showing 1 to 15 of 17 results
Peer reviewed
Clintin P. Davis-Stober; Jason Dana; David Kellen; Sara D. McMullin; Wes Bonifay – Grantee Submission, 2023
Conducting research with human subjects can be difficult because of limited sample sizes and small empirical effects. We demonstrate that this problem can yield patterns of results that are practically indistinguishable from flipping a coin to determine the direction of treatment effects. We use this idea of random conclusions to establish a…
Descriptors: Research Methodology, Sample Size, Effect Size, Hypothesis Testing
Jason C. Chow; Jennifer R. Ledford; Sienna Windsor; Paige Bennett – Exceptional Children, 2023
The purpose of this study is to present a set of empirically derived effect size distributions in order to provide field-based benchmarks for assessing the relative effects of interventions aimed at reducing challenging behavior or increasing engagement for young children with and without disabilities. We synthesized 192 single-case designs that…
Descriptors: Behavior Problems, Intervention, Prediction, Learner Engagement
Peer reviewed
Kern, Holger L.; Stuart, Elizabeth A.; Hill, Jennifer; Green, Donald P. – Journal of Research on Educational Effectiveness, 2016
Randomized experiments are considered the gold standard for causal inference because they can provide unbiased estimates of treatment effects for the experimental participants. However, researchers and policymakers are often interested in using a specific experiment to inform decisions about other target populations. In education research,…
Descriptors: Educational Research, Generalization, Sampling, Participant Characteristics
Peer reviewed
Gierut, Judith A.; Morrisette, Michele L.; Dickinson, Stephanie L. – Journal of Speech, Language, and Hearing Research, 2015
Purpose: The purpose of this study was to document, validate, and corroborate effect size (ES) for single-subject design in treatment of children with functional phonological disorders; to evaluate potential child-specific contributing variables relative to ES; and to establish benchmarks for interpretation of ES for the population. Method: Data…
Descriptors: Effect Size, Research Design, Phonology, Speech Therapy
Peer reviewed
McNeish, Daniel – Review of Educational Research, 2017
In education research, small samples are common because of financial limitations, logistical challenges, or exploratory studies. With small samples, statistical principles on which researchers rely do not hold, leading to trust issues with model estimates and possible replication issues when scaling up. Researchers are generally aware of such…
Descriptors: Models, Statistical Analysis, Sampling, Sample Size
Peer reviewed
Moeller, Jeremy D.; Dattilo, John; Rusch, Frank – Psychology in the Schools, 2015
This study examined how specific guidelines and heuristics have been used to identify methodological rigor associated with single-case research designs based on quality indicators developed by Horner et al. Specifically, this article describes how literature reviews have applied Horner et al.'s quality indicators and evidence-based criteria.…
Descriptors: Research Design, Special Education, Literature Reviews, Educational Indicators
Peer reviewed
DeBarger, Angela Haydel; Penuel, William R.; Harris, Christopher J.; Kennedy, Cathleen A. – American Journal of Evaluation, 2016
Evaluators must employ research designs that generate compelling evidence about the worth or value of programs, and assessment data often play a critical role in that evidence. This article focuses on assessment design in the context of evaluation. It describes the process of using the Framework for K-12 Science Education and Next Generation Science…
Descriptors: Intervention, Program Evaluation, Research Design, Science Tests
Peer reviewed
Magrath, Bronwen – Comparative Education Review, 2015
This article explores transnational activism within Education for All (EFA), looking specifically at the strategic use of information and research by transnational advocacy organizations. Through a comparative case-study examination of two prominent civil society organizations within the EFA movement--the Asia South Pacific Association for Basic…
Descriptors: Access to Education, Politics of Education, Activism, Advocacy
Peer reviewed
Mueller, Christoph Emanuel; Gaus, Hansjoerg – American Journal of Evaluation, 2015
In this article, we test an alternative approach to creating a counterfactual basis for estimating individual and average treatment effects. Instead of using control/comparison groups or before-measures, the so-called Counterfactual as Self-Estimated by Program Participants (CSEPP) relies on program participants' self-estimations of their own…
Descriptors: Intervention, Research Design, Research Methodology, Program Evaluation
Peer reviewed
Fallon, Lindsay M.; Collier-Meek, Melissa A.; Maggin, Daniel M.; Sanetti, Lisa M. H.; Johnson, Austin H. – Exceptional Children, 2015
Optimal levels of treatment fidelity, a critical moderator of intervention effectiveness, are often difficult to sustain in applied settings. It is unknown whether performance feedback, a widely researched method for increasing educators' treatment fidelity, is an evidence-based practice. The purpose of this review was to evaluate the current…
Descriptors: Feedback (Response), Evidence, Literature Reviews, Intervention
Peer reviewed
Wing, Coady; Cook, Thomas D. – Journal of Policy Analysis and Management, 2013
The sharp regression discontinuity design (RDD) has three key weaknesses compared to the randomized clinical trial (RCT). It has lower statistical power, it is more dependent on statistical modeling assumptions, and its treatment effect estimates are limited to the narrow subpopulation of cases immediately around the cutoff, which is rarely of…
Descriptors: Regression (Statistics), Research Design, Statistical Analysis, Research Problems
Peer reviewed
Losinski, Mickey; Maag, John W.; Katsiyannis, Antonis; Ennis, Robin Parks – Exceptional Children, 2014
Interventions based on the results of functional behavioral assessment (FBA) have been the topic of extensive research and, in certain cases, mandated for students with disabilities under the Individuals With Disabilities Education Act. There exist a wide variety of methods for conducting such assessments, with little consensus in the field. The…
Descriptors: Intervention, Predictor Variables, Program Effectiveness, Educational Quality
Peer reviewed
Metz, Thaddeus – Theory and Research in Education, 2011
Concomitant with the rise of rationalizing accountability in higher education has been an increase in theoretical reflection about the forms accountability has taken and the ones it should take. The literature is now peppered by a wide array of distinctions (e.g. internal/external, inward/outward, vertical/horizontal, upward/downward,…
Descriptors: Higher Education, Accountability, Accounting, Models
Peer reviewed
Volkwein, J. Fredericks – New Directions for Institutional Research, 2010
In this chapter, the author proposes a model for assessing institutional effectiveness. The Volkwein model for assessing institutional effectiveness consists of five parts that summarize the steps for assessing institutions, programs, faculty, and students. The first step in the model distinguishes the dual purposes of institutional effectiveness:…
Descriptors: Institutional Evaluation, Models, Evaluation Methods, Evaluation Criteria
Peer reviewed
Spokane, Arnold R.; Meir, Elchanan I.; Catalano, Michele – Journal of Vocational Behavior, 2000
Examination of 66 congruence studies, including benchmarking studies with improved methodology, showed that congruence is a sufficient though not a necessary condition for job satisfaction. A paradigmatic shift is needed with diverse research designs and methods, emphasizing experiment and drawing on person-environment psychology. (Contains 142…
Descriptors: Benchmarking, Congruence (Psychology), Job Satisfaction, Meta Analysis