Showing 1 to 15 of 37 results
Peer reviewed
Zid Mancenido – Review of Educational Research, 2024
Many teacher education researchers have expressed concerns about the lack of rigorous impact evaluations of teacher preparation practices. I summarize these various concerns as they relate to issues of internal validity, measurement, and external validity. I then assess the prevalence of these issues by reviewing 166 impact evaluations of teacher…
Descriptors: Teacher Education, Educational Research, Program Evaluation, Validity
Peer reviewed
Huey T. Chen; Liliana Morosanu; Victor H. Chen – Asia Pacific Journal of Education, 2024
The Campbellian validity typology has been used as a foundation for outcome evaluation and for developing evidence-based interventions for decades. As such, randomized controlled trials were preferred for outcome evaluation. However, some evaluators disagree with the validity typology's argument that randomized controlled trials are the best design…
Descriptors: Evaluation Methods, Systems Approach, Intervention, Evidence Based Practice
Hedges, Larry V.; Schauer, Jacob M. – Journal of Educational and Behavioral Statistics, 2019
The problem of assessing whether experimental results can be replicated is becoming increasingly important in many areas of science. It is often assumed that assessing replication is straightforward: All one needs to do is repeat the study and see whether the results of the original and replication studies agree. This article shows that the…
Descriptors: Replication (Evaluation), Research Design, Research Methodology, Program Evaluation
Hedges, Larry V.; Schauer, Jacob M. – Grantee Submission, 2019
The problem of assessing whether experimental results can be replicated is becoming increasingly important in many areas of science. It is often assumed that assessing replication is straightforward: All one needs to do is repeat the study and see whether the results of the original and replication studies agree. This article shows that the…
Descriptors: Replication (Evaluation), Research Design, Research Methodology, Program Evaluation
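The two Hedges and Schauer records above argue that "seeing whether the results agree" needs a formal criterion. One common way to formalize agreement between an original and a replication effect estimate is a fixed-effect heterogeneity (Q) test; the sketch below illustrates that general idea with hypothetical numbers, not the authors' specific procedure.

```python
# A minimal sketch (not the authors' exact method) of testing whether an
# original study and a replication "agree": treat the two effect estimates
# as a tiny fixed-effect meta-analysis and test for heterogeneity with a
# Q statistic (chi-square, 1 df). All numbers are hypothetical.
from scipy.stats import chi2

def replication_q_test(est_orig, se_orig, est_rep, se_rep):
    """Return (Q, p): homogeneity test for two effect estimates."""
    estimates = [est_orig, est_rep]
    weights = [1 / se_orig**2, 1 / se_rep**2]   # inverse-variance weights
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    q = sum(w * (e - pooled)**2 for w, e in zip(weights, estimates))
    return q, chi2.sf(q, df=1)                  # 1 df for two studies

# Hypothetical: original d = 0.45 (SE 0.10), replication d = 0.15 (SE 0.12)
q, p = replication_q_test(0.45, 0.10, 0.15, 0.12)
print(f"Q = {q:.2f}, p = {p:.3f}")  # small p => more disagreement than chance
```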
Peer reviewed
Wing, Coady; Bello-Gomez, Ricardo A. – American Journal of Evaluation, 2018
Treatment effect estimates from a "regression discontinuity design" (RDD) have high internal validity. However, the arguments that support the design apply to a subpopulation that is narrower and usually different from the population of substantive interest in evaluation research. The disconnect between RDD population and the…
Descriptors: Regression (Statistics), Research Design, Validity, Evaluation Methods
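As a concrete illustration of the point in the Wing and Bello-Gomez abstract, the sketch below estimates a sharp RDD effect by fitting separate local linear trends on each side of a cutoff. The estimate describes units near the cutoff, which is exactly the narrow subpopulation the authors discuss. The data, cutoff, and bandwidth are hypothetical.

```python
# A minimal sharp-RDD sketch (illustrative, not from the article): with
# running variable x, cutoff c, and outcome y, fit a linear trend within a
# bandwidth on each side of the cutoff; the estimated effect is the jump
# in the fitted outcome at x = c.
import numpy as np

def sharp_rdd(x, y, cutoff, bandwidth):
    """Local linear sharp-RDD estimate of the jump in E[y|x] at the cutoff."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc = x - cutoff                               # center at the cutoff
    fitted = {}
    for side, mask in (("left", (xc < 0) & (xc >= -bandwidth)),
                       ("right", (xc >= 0) & (xc <= bandwidth))):
        X = np.column_stack([np.ones(mask.sum()), xc[mask]])
        beta, *_ = np.linalg.lstsq(X, y[mask], rcond=None)
        fitted[side] = beta[0]                    # fitted value at x = cutoff
    return fitted["right"] - fitted["left"]

# Hypothetical data with a true jump of 2.0 at cutoff 0
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 500)
y = 1.0 + 0.5 * x + 2.0 * (x >= 0) + rng.normal(0, 0.5, 500)
print(f"estimated effect near the cutoff: {sharp_rdd(x, y, 0.0, 0.5):.2f}")
```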
Peer reviewed
Steenbergen-Hu, Saiying; Olszewski-Kubilius, Paula – Journal of Advanced Academics, 2016
The article by Davis, Engberg, Epple, Sieg, and Zimmer (2010) represents one of the recent research efforts from economists in evaluating the impact of gifted programs. It can serve as a worked example of the implementation of the regression discontinuity (RD) design method in gifted education research. In this commentary, we first illustrate the…
Descriptors: Special Education, Gifted, Identification, Program Evaluation
Peer reviewed
Sørlie, Mari-Anne; Ogden, Terje – International Journal of School & Educational Psychology, 2014
This paper reviews literature on the rationale, challenges, and recommendations for choosing a nonequivalent comparison (NEC) group design when evaluating intervention effects. After reviewing frequently addressed threats to validity, the paper describes recommendations for strengthening the research design and how the recommendations were…
Descriptors: Validity, Research Design, Experiments, Prevention
DiNardo, John; Lee, David S. – National Bureau of Economic Research, 2010
This chapter provides a selective review of some contemporary approaches to program evaluation. One motivation for our review is the recent emergence and increasing use of a particular kind of "program" in applied microeconomic research, the so-called Regression Discontinuity (RD) Design of Thistlethwaite and Campbell (1960). We organize our…
Descriptors: Research Design, Program Evaluation, Validity, Experiments
Peer reviewed
Astor, Ron Avi; Guerra, Nancy; Van Acker, Richard – Educational Researcher, 2010
The authors of this article consider how education researchers can improve school violence and school safety research by (a) examining gaps in theoretical, conceptual, and basic research on the phenomena of school violence; (b) reviewing key issues in the design and evaluation of evidence-based practices to prevent school violence; and (c)…
Descriptors: Violence, School Safety, Educational Research, Research Methodology
Peer reviewed
Fudge, Daniel L.; Skinner, Christopher H.; Williams, Jacqueline L.; Cowden, Dan; Clark, Janice; Bliss, Stacy L. – Journal of School Psychology, 2008
A single-case (B-C-B-C) experimental design was used to evaluate the effects of the Color Wheel classroom management system (CWS) on on-task (OT) behavior in an intact, general-education, 2nd-grade classroom during transitions. The CWS included three sets of rules, posted cues to indicate the rules students are expected to be following at that…
Descriptors: Classroom Techniques, Research Design, Cues, Data Analysis
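For readers unfamiliar with B-C-B-C notation, the sketch below shows the usual first summary of such single-case data: comparing the level of the target behavior across alternating phases. The session values, and which condition is labeled B versus C, are assumptions for illustration, not data from the study.

```python
# A minimal sketch (hypothetical data) of summarizing a B-C-B-C
# single-case design: compute the level of the behavior (here, percent
# on-task) in each phase and look for replicated shifts across conditions.
import statistics

# Hypothetical session-by-session percent on-task, by phase
phases = {
    "B1 (Color Wheel)": [88, 91, 85, 90],
    "C1 (usual practices)": [62, 58, 65],
    "B2 (Color Wheel)": [89, 93, 87],
    "C2 (usual practices)": [60, 64, 59],
}

for name, sessions in phases.items():
    print(f"{name}: mean on-task = {statistics.mean(sessions):.1f}%")
# An effect is supported when the behavior shifts each time conditions
# change (high in both B phases, low in both C phases), replicating the
# effect within the single case.
```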
Peer reviewed
de Anda, Diane – Children & Schools, 2007
This article discusses the difficulties in conducting intervention research or evaluating intervention programs in a school setting. In particular, the problems associated with randomization and obtaining control groups are examined. The use of quasi-experimental designs, specifically a paired comparison design using the individual as his or her…
Descriptors: Program Evaluation, Intervention, Research Design, Control Groups
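The "individual as his or her own control" idea that the de Anda abstract describes can be made concrete with a paired comparison: each student's outcome under one condition is paired with the same student's outcome under the other, and the paired differences are tested. The scores below are hypothetical.

```python
# A minimal sketch of a paired comparison design using each student as his
# or her own control. Data and measure are hypothetical.
from scipy.stats import ttest_rel

# Hypothetical scores for the same 8 students under two conditions
control_phase = [14, 11, 16, 9, 13, 12, 15, 10]
intervention_phase = [17, 13, 18, 12, 16, 13, 19, 12]

t, p = ttest_rel(intervention_phase, control_phase)
diffs = [b - a for a, b in zip(control_phase, intervention_phase)]
print(f"mean paired gain = {sum(diffs) / len(diffs):.2f}, "
      f"t = {t:.2f}, p = {p:.4f}")
# Pairing removes between-student variability, which helps when random
# assignment to a separate control group is infeasible in school settings.
```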
Peer reviewed
Lindvall, C. Mauritz; Nitko, Anthony J. – Educational Evaluation and Policy Analysis, 1981
A design for evaluation studies of educational programs should provide valid and defensible inferences. The goals of evaluation are the identification of the major components of those inferences and the specific validity concerns they raise. Design problems may be resolved by creatively using features of specific evaluations to design unique conditions that permit valid…
Descriptors: Educational Assessment, Program Evaluation, Research Design, Research Methodology
Peer reviewed
Yeaton, William; Sechrest, Lee – New Directions for Program Evaluation, 1987
In no-difference research, no differences are found among groups or conditions. This article summarizes the existing commentary on such research. The characteristics of no-difference research, its acceptance by the research community, strategies for conducting such studies, and its centrality within the experimental and nonexperimental paradigms…
Descriptors: Evaluation Methods, Literature Reviews, Models, Program Evaluation
Peer reviewed
Barnette, J. Jackson; Wallis, Anne Baber – American Journal of Evaluation, 2005
We rely a great deal on the schematic descriptions that represent experimental and quasi-experimental design arrangements, as well as the discussions of threats to validity associated with these, provided by Campbell and his associates: Stanley, Cook, and Shadish. Some of these designs include descriptions of treatments removed, removed and then…
Descriptors: Intervention, Validity, Quasiexperimental Design, Evaluation Methods
Peer reviewed
Horn, Wade F. – Evaluation Review, 1982
In an overview of single-case methodology, the potential utility of A-B-A and multiple baseline designs for evaluating social programs is discussed. Validity factors and cost-effectiveness are considered, showing that these designs are viable alternative methods where traditional randomized group designs are infeasible. (Author/CM)
Descriptors: Case Studies, Cost Effectiveness, Multivariate Analysis, Program Evaluation