Publication Date
In 2025 | 0 |
Since 2024 | 0 |
Since 2021 (last 5 years) | 1 |
Since 2016 (last 10 years) | 6 |
Since 2006 (last 20 years) | 21 |
Descriptor
Meta Analysis | 27 |
Program Evaluation | 27 |
Research Design | 27 |
Program Effectiveness | 15 |
Research Methodology | 13 |
Effect Size | 12 |
Educational Research | 9 |
Evaluation Methods | 7 |
Intervention | 7 |
Sample Size | 5 |
Elementary Secondary Education | 4 |
Author
Slavin, Robert E. | 3 |
Cheung, Alan C. K. | 2 |
Hedges, Larry V. | 2 |
Schauer, Jacob M. | 2 |
Slavin, Robert | 2 |
Valentine, Jeffrey C. | 2 |
Alias, Norlidah | 1 |
Banister, Aaron | 1 |
Bremer, Christine D. | 1 |
Cason, Dana | 1 |
Castellano, Marisa | 1 |
Publication Type
Journal Articles | 20 |
Reports - Research | 12 |
Reports - Evaluative | 8 |
Information Analyses | 6 |
Reports - Descriptive | 5 |
Opinion Papers | 2 |
Guides - Non-Classroom | 1 |
Speeches/Meeting Papers | 1 |
Audience
Policymakers | 1 |
Researchers | 1 |
Location
Tennessee | 1 |
United Kingdom | 1 |
United States | 1 |
Laws, Policies, & Programs
Individuals with Disabilities… | 1 |
Shepley, Collin; Grisham-Brown, Jennifer; Lane, Justin D. – Topics in Early Childhood Special Education, 2022
Multitiered systems of support provide a framework for matching the needs of a struggling student with an appropriate intervention. Experimental evaluations of tiered support systems in grade schools have been conducted for decades but are less common in early childhood contexts. A recent meta-analysis of multitiered systems of…
Descriptors: Positive Behavior Supports, Preschool Children, Preschool Education, Program Evaluation
Wolf, Rebecca; Morrison, Jennifer; Inns, Amanda; Slavin, Robert; Risman, Kelsey – Journal of Research on Educational Effectiveness, 2020
Rigorous evidence of program effectiveness has become increasingly important with the 2015 passage of the Every Student Succeeds Act (ESSA). One question that has not yet been fully explored is whether program evaluations carried out or commissioned by developers produce larger effect sizes than evaluations conducted by independent third parties.…
Descriptors: Program Evaluation, Program Effectiveness, Effect Size, Sample Size
Hedges, Larry V.; Schauer, Jacob M. – Journal of Educational and Behavioral Statistics, 2019
The problem of assessing whether experimental results can be replicated is becoming increasingly important in many areas of science. It is often assumed that assessing replication is straightforward: All one needs to do is repeat the study and see whether the results of the original and replication studies agree. This article shows that the…
Descriptors: Replication (Evaluation), Research Design, Research Methodology, Program Evaluation
Hedges, Larry V.; Schauer, Jacob M. – Grantee Submission, 2019
The problem of assessing whether experimental results can be replicated is becoming increasingly important in many areas of science. It is often assumed that assessing replication is straightforward: All one needs to do is repeat the study and see whether the results of the original and replication studies agree. This article shows that the…
Descriptors: Replication (Evaluation), Research Design, Research Methodology, Program Evaluation
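The two Hedges and Schauer records above argue that deciding whether an original study and its replication "agree" is less straightforward than simply repeating the study and comparing results. As a minimal sketch of one standard way to formalize that comparison, the snippet below runs a heterogeneity (Q) test on two effect estimates; the effect sizes, standard errors, and the choice of this particular test are illustrative assumptions, not the article's specific procedure.

```python
# Illustrative only: a Q-type heterogeneity test for whether an original
# effect estimate and a replication estimate are consistent with a single
# underlying effect. The effect sizes and standard errors are made up.
from scipy.stats import chi2

d = [0.40, 0.15]   # standardized mean differences: original, replication
se = [0.10, 0.12]  # their standard errors

w = [1 / s**2 for s in se]                               # inverse-variance weights
d_bar = sum(wi * di for wi, di in zip(w, d)) / sum(w)    # pooled estimate
Q = sum(wi * (di - d_bar) ** 2 for wi, di in zip(w, d))  # heterogeneity statistic

# With two studies, Q has 1 degree of freedom under the null that both
# estimate the same true effect.
p = chi2.sf(Q, df=1)
print(f"Q = {Q:.2f}, p = {p:.3f}")  # a small p suggests the results disagree
```

With only two studies, a test like this typically has little power to detect real disagreement, which is one illustration of why a bare "repeat and compare" check can be misleading.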
Kulik, James A.; Fletcher, J. D. – Review of Educational Research, 2016
This review describes a meta-analysis of findings from 50 controlled evaluations of intelligent computer tutoring systems. The median effect of intelligent tutoring in the 50 evaluations was to raise test scores 0.66 standard deviations over conventional levels, or from the 50th to the 75th percentile. However, the amount of improvement found in…
Descriptors: Intelligent Tutoring Systems, Meta Analysis, Computer Assisted Instruction, Statistical Analysis
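As a quick check on the arithmetic in the Kulik and Fletcher entry above, the sketch below converts a 0.66 standard deviation gain into a percentile shift; the assumption of normally distributed scores is mine, used only to reproduce the 50th-to-75th-percentile claim.

```python
# Convert an effect size in standard deviation units to the percentile an
# average (50th percentile) student would reach, assuming normal scores.
from scipy.stats import norm

effect_size = 0.66                       # median effect reported in the review
percentile = norm.cdf(effect_size) * 100
print(f"about the {percentile:.0f}th percentile")  # ~75th, matching the abstract
```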
Reeves, Barnaby C.; Higgins, Julian P. T.; Ramsay, Craig; Shea, Beverley; Tugwell, Peter; Wells, George A. – Research Synthesis Methods, 2013
Background: Methods need to be further developed to include non-randomised studies (NRS) in systematic reviews of the effects of health care interventions. NRS are often required to answer questions about harms and interventions for which evidence from randomised controlled trials (RCTs) is not available. Methods used to review randomised…
Descriptors: Research Methodology, Research Design, Health Services, Workshops
Valentine, Jeffrey C.; Thompson, Simon G. – Research Synthesis Methods, 2013
Background: Confounding caused by selection bias is often a key difference between non-randomized studies (NRS) and randomized controlled trials (RCTs) of interventions. Key methodological issues: In this third paper of the series, we consider issues relating to the inclusion of NRS in systematic reviews on the effects of interventions. We discuss…
Descriptors: Research Design, Randomized Controlled Trials, Intervention, Bias
Shager, Hilary M.; Schindler, Holly S.; Magnuson, Katherine A.; Duncan, Greg J.; Yoshikawa, Hirokazu; Hart, Cassandra M. D. – Educational Evaluation and Policy Analysis, 2013
This study explores the extent to which differences in research design explain variation in Head Start program impacts. We employ meta-analytic techniques to predict effect sizes for cognitive and achievement outcomes as a function of the type and rigor of research design, quality and type of outcome measure, activity level of control group, and…
Descriptors: Meta Analysis, Preschool Education, Disadvantaged Youth, Outcome Measures
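The Shager et al. entry describes predicting effect sizes from study-level features such as design type and rigor, i.e., a meta-regression. The sketch below is a generic inverse-variance-weighted version with fabricated data and hypothetical moderator names; it illustrates the technique, not the authors' actual model.

```python
# Illustrative meta-regression: regress study effect sizes on study-level
# moderators, weighting each study by the inverse of its sampling variance.
# All data and moderator names are hypothetical.
import numpy as np

effect = np.array([0.35, 0.20, 0.10, 0.45, 0.15])    # study effect sizes
var = np.array([0.010, 0.020, 0.015, 0.030, 0.012])  # sampling variances
randomized = np.array([1, 0, 0, 1, 0])               # 1 = randomized design
researcher_dev = np.array([1, 1, 0, 1, 0])           # 1 = researcher-developed outcome

X = np.column_stack([np.ones_like(effect), randomized, researcher_dev])
w = 1.0 / var

# Weighted least squares via the square-root-weight transformation.
sw = np.sqrt(w)
coef, *_ = np.linalg.lstsq(X * sw[:, None], effect * sw, rcond=None)

for name, b in zip(["intercept", "randomized", "researcher_dev"], coef):
    print(f"{name:>15s}: {b:+.3f}")
```

In a real analysis, the coefficient on a moderator such as "randomized" would be read as the average shift in reported effect size associated with that design feature, which is the kind of question this abstract raises.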
Jamaludin, Khairul Azhar; Alias, Norlidah; DeWitt, Dorothy – Turkish Online Journal of Educational Technology - TOJET, 2015
The practice of homeschooling still draws mixed responses regarding its relevance and effectiveness. The current study aims to map trends across eleven selected studies from various educational journals. The analysis focuses on trends in: a) research settings, b) target sample, c) method or instrument used, d) common focus or…
Descriptors: Journal Articles, Home Schooling, Educational Practices, Educational Research
Kanu, Mohamed; Hepler, Nancy; Labi, Halima – Journal of Child & Adolescent Substance Abuse, 2015
Background: Since 1984, Students Taking a Right Stand (STARS) Nashville has implemented Student Assistance Programs (SAPs) in the middle Tennessee area, covering 14 counties and 16 school districts. STARS Nashville serves grades K-12, with a focus on middle and high schools. Methods: The current study reviewed studies that utilized quasi-experimental…
Descriptors: Quasiexperimental Design, Middle Schools, High Schools, Meta Analysis
Cheung, Alan; Slavin, Robert – Society for Research on Educational Effectiveness, 2016
As evidence-based reform becomes increasingly important in educational policy, it is essential to understand how research design might contribute to reported effect sizes in experiments evaluating educational programs. The purpose of this study was to examine how methodological features such as types of publication, sample sizes, and…
Descriptors: Effect Size, Evidence Based Practice, Educational Change, Educational Policy
Institute of Education Sciences, 2013
In January 2011, a Joint Committee of representatives from the U.S. Department of Education (ED) and the U.S. National Science Foundation (NSF) began work to establish cross-agency guidelines for improving the quality, coherence, and pace of knowledge development in science, technology, engineering and mathematics (STEM) education. Although the…
Descriptors: STEM Education, Research and Development, Intervention, Educational Improvement
Cheung, Alan C. K.; Slavin, Robert E. – Center for Research and Reform in Education, 2012
This review examines the effectiveness of educational technology applications in improving the reading achievement of struggling readers in elementary schools. The review applies consistent inclusion standards to focus on studies that met high methodological standards. A total of 20 studies based on about 7,000 students in grades K-6 were included…
Descriptors: Reading Achievement, Educational Technology, Reading Difficulties, Reading Programs
Stufflebeam, Daniel L. – Journal of MultiDisciplinary Evaluation, 2011
Good evaluation requires that evaluation efforts themselves be evaluated. Many things can and often do go wrong in evaluation work. Accordingly, it is necessary to check evaluations for problems such as bias, technical error, administrative difficulties, and misuse. Such checks are needed both to improve ongoing evaluation activities and to assess…
Descriptors: Program Evaluation, Evaluation Criteria, Evaluation Methods, Definitions
Tucker, Mary L.; Gullekson, Nicole L.; McCambridge, Jim – Research in Higher Education Journal, 2011
Business students are increasingly seeking international experience through short-term study abroad programs to enhance their intercultural knowledge, intercultural communication skills, and global perspectives and to be more competitive in the global arena. Intuitively, universities initiating these programs and the students sojourning abroad believe in…
Descriptors: Study Abroad, Program Effectiveness, Program Evaluation, Achievement Gains