Showing 1 to 15 of 59 results
Peer reviewed
Matthew J. Mayhew; Christa E. Winkler – Journal of Postsecondary Student Success, 2024
Higher education professionals are often tasked with providing evidence to stakeholders that programs, services, and practices implemented on their campuses contribute to student success. Furthermore, in the absence of a solid base of evidence related to effective practices, higher education researchers and practitioners are left questioning what…
Descriptors: Higher Education, Educational Practices, Evidence Based Practice, Program Evaluation
Hedges, Larry V.; Schauer, Jacob M. – Journal of Educational and Behavioral Statistics, 2019
The problem of assessing whether experimental results can be replicated is becoming increasingly important in many areas of science. It is often assumed that assessing replication is straightforward: All one needs to do is repeat the study and see whether the results of the original and replication studies agree. This article shows that the…
Descriptors: Replication (Evaluation), Research Design, Research Methodology, Program Evaluation
Hedges, Larry V.; Schauer, Jacob M. – Grantee Submission, 2019
The problem of assessing whether experimental results can be replicated is becoming increasingly important in many areas of science. It is often assumed that assessing replication is straightforward: All one needs to do is repeat the study and see whether the results of the original and replication studies agree. This article shows that the…
Descriptors: Replication (Evaluation), Research Design, Research Methodology, Program Evaluation
Peer reviewed
What Works Clearinghouse, 2018
Underlying all What Works Clearinghouse (WWC) products are WWC Study Review Guides, which are intended for use by WWC certified reviewers to assess studies against the WWC evidence standards. As part of an ongoing effort to increase transparency, promote collaboration, and encourage widespread use of the WWC standards, the Institute of Education…
Descriptors: Guides, Research Design, Research Methodology, Program Evaluation
Peer reviewed
What Works Clearinghouse, 2016
This document provides step-by-step instructions on how to complete the Study Review Guide (SRG, Version S3, V2) for single-case designs (SCDs). Reviewers will complete an SRG for every What Works Clearinghouse (WWC) review. A completed SRG should be a reviewer's independent assessment of the study, relative to the criteria specified in the review…
Descriptors: Guides, Research Design, Research Methodology, Program Evaluation
Peer reviewed
Tipton, Elizabeth; Hedges, Larry; Vaden-Kiernan, Michael; Borman, Geoffrey; Sullivan, Kate; Caverly, Sarah – Journal of Research on Educational Effectiveness, 2014
Randomized experiments are often seen as the "gold standard" for causal research. Despite the fact that experiments use random assignment to treatment conditions, units are seldom selected into the experiment using probability sampling. Very little research on experimental design has focused on how to make generalizations to well-defined…
Descriptors: Probability, Inferences, Eligibility, Recruitment
Deke, John; Dragoset, Lisa – Mathematica Policy Research, Inc., 2012
The regression discontinuity design (RDD) has the potential to yield findings with causal validity approaching that of the randomized controlled trial (RCT). However, Schochet (2008a) estimated that, on average, an RDD study of an education intervention would need to include three to four times as many schools or students as an RCT to produce…
Descriptors: Research Design, Elementary Secondary Education, Regression (Statistics), Educational Research
Peer reviewed
Harvill, Eleanor L.; Peck, Laura R.; Bell, Stephen H. – American Journal of Evaluation, 2013
Using exogenous characteristics to identify endogenous subgroups, the approach discussed in this method note creates symmetric subsets within treatment and control groups, allowing the analysis to take advantage of an experimental design. In order to maintain treatment–control symmetry, however, prior work has posited that it is necessary to use…
Descriptors: Experimental Groups, Control Groups, Research Design, Sampling
Peer reviewed
Bell, Stephen H.; Puma, Michael J.; Cook, Ronna J.; Heid, Camilla A. – Society for Research on Educational Effectiveness, 2013
Access to Head Start has been shown to improve children's preschool experiences and school readiness on selected factors through the end of 1st grade. Two more years of follow-up, through the end of 3rd grade, can now be examined to determine whether these effects continue into the middle elementary grades. The statistical design and impact…
Descriptors: Evaluation Methods, Data Analysis, Randomized Controlled Trials, Sampling
Spybrook, Jessaca; Lininger, Monica; Cullen, Anne – Society for Research on Educational Effectiveness, 2011
The purpose of this study is to extend the work of Spybrook and Raudenbush (2009) and examine how the research designs and sample sizes changed from the planning phase to the implementation phase in the first wave of studies funded by IES. The authors examine the impact of the changes in terms of the changes in the precision of the study from the…
Descriptors: Evaluation Criteria, Sampling, Research Design, Planning
Peer reviewed
Dubois, Cathy; Long, Lori – International Journal on E-Learning, 2012
E-learning researchers face considerable challenges in creating meaningful and generalizable studies due to the complex nature of this dynamic training medium. Our experience in conducting workplace e-learning research led us to create this guide for planning research on e-learning. We share the unanticipated complications we encountered in our…
Descriptors: Electronic Learning, Course Content, Instructional Design, Program Implementation
Peer reviewed
Ji, Peter; DuBois, David L.; Flay, Brian R.; Brechling, Vanessa – Journal of School Health, 2008
Background: Recruiting schools into a matched-pair randomized control trial (MP-RCT) to evaluate the efficacy of a school-level prevention program presents challenges for researchers. We considered which of 2 procedures would be most effective for recruiting schools into the study and assigning them to conditions. In 1 procedure (recruit and…
Descriptors: Control Groups, Prevention, Recruitment, Sampling
Peer reviewed
Schochet, Peter Z. – Journal of Educational and Behavioral Statistics, 2008
This article examines theoretical and empirical issues related to the statistical power of impact estimates for experimental evaluations of education programs. The author considers designs where random assignment is conducted at the school, classroom, or student level, and employs a unified analytic framework using statistical methods from the…
Descriptors: Elementary School Students, Research Design, Standardized Tests, Program Evaluation
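The power issue Schochet examines can be made concrete with a back-of-the-envelope calculation. The sketch below is illustrative only: it uses the standard two-level minimum-detectable-effect-size (MDES) formula for a balanced cluster-randomized trial with no covariate adjustment, and the sample sizes and intraclass correlation are assumed values, not figures from the article.

```python
import math

Z_ALPHA = 1.96  # two-sided test at alpha = 0.05
Z_POWER = 0.84  # target power = 0.80

def mdes_cluster(n_clusters, cluster_size, icc, p_treat=0.5):
    """Minimum detectable effect size (in SD units) for a two-arm trial
    with random assignment at the cluster (e.g., school) level.
    icc = 0 reduces to simple student-level random assignment."""
    j, n, rho = n_clusters, cluster_size, icc
    var = (rho + (1 - rho) / n) / (p_treat * (1 - p_treat) * j)
    return (Z_ALPHA + Z_POWER) * math.sqrt(var)

# Assumed design: 40 schools of 60 students, ICC of 0.15 for test scores.
school_level = mdes_cluster(n_clusters=40, cluster_size=60, icc=0.15)
student_level = mdes_cluster(n_clusters=40, cluster_size=60, icc=0.0)
print(round(school_level, 3), round(student_level, 3))
```

Under these assumptions, school-level assignment roughly triples the smallest effect the design can detect, which is why unified power frameworks distinguish the level of random assignment.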
Peer reviewed
Silvia, Suyapa; Blitstein, Jonathan; Williams, Jason; Ringwalt, Chris; Dusenbury, Linda; Hansen, William – National Center for Education Evaluation and Regional Assistance, 2010
This is the first of two reports that summarize the findings from an impact evaluation of a violence prevention intervention for middle schools. This report discusses findings after 1 year of implementation. A forthcoming report will discuss the findings after 2 years and 3 years of implementation. In 2004, the U.S. Department of Education (ED)…
Descriptors: Middle Schools, Violence, Prevention, Intervention
Goldman, Jerry – Evaluation Quarterly, 1977
This note suggests a solution to the problem of achieving randomization in experimental settings where units deemed eligible for treatment "trickle in," that is, appear at any time. The solution permits replication of the experiment in order to test for time-dependent effects. (Author/CTM)
Descriptors: Program Evaluation, Research Design, Research Problems, Sampling
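The abstract above states the trickle-in problem but does not reproduce Goldman's specific solution. One common way to randomize units that arrive one at a time is permuted-block assignment, sketched below; the block size and group labels are illustrative assumptions, not details from the note.

```python
import random

def trickle_in_assigner(block_size=4, seed=0):
    """Yield treatment ('T') / control ('C') labels one at a time.
    Each block of `block_size` arrivals is a fresh random permutation
    with equal T and C counts, so allocation stays balanced no matter
    when eligible units happen to appear."""
    rng = random.Random(seed)
    while True:
        block = ["T"] * (block_size // 2) + ["C"] * (block_size // 2)
        rng.shuffle(block)
        yield from block

assigner = trickle_in_assigner()
arrivals = [next(assigner) for _ in range(12)]
print(arrivals)
```

Because every completed block is itself a small balanced experiment, the blocks can also be compared against one another to probe the time-dependent effects the note mentions.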