Publication Date
  In 2025: 0
  Since 2024: 0
  Since 2021 (last 5 years): 0
  Since 2016 (last 10 years): 2
  Since 2006 (last 20 years): 5
Descriptor
  Program Evaluation: 12
  Reliability: 12
  Research Design: 12
  Validity: 6
  Research Methodology: 5
  Evaluation Methods: 4
  Program Effectiveness: 4
  Data Collection: 3
  Effect Size: 3
  Sampling: 3
  Case Studies: 2
Source
  American Journal of Evaluation: 1
  Education and Treatment of…: 1
  Evaluation and Program…: 1
  Grantee Submission: 1
  Journal of Educational and…: 1
  Journal of Human Resources: 1
  New Directions for Evaluation: 1
Author
  Hedges, Larry V.: 2
  Schauer, Jacob M.: 2
  Button, Scott B.: 1
  Cox, Meredith: 1
  Deniston, O. Lynn: 1
  Diaz, Juan Jose: 1
  Endo, George T.: 1
  Feller, Irwin: 1
  Gaus, Hansjoerg: 1
  Granville, Arthur C.: 1
  Guba, Egon G.: 1
Publication Type
  Reports - Research: 9
  Journal Articles: 6
  Reports - Descriptive: 2
  Information Analyses: 1
  Reports - Evaluative: 1
  Speeches/Meeting Papers: 1
  Tests/Questionnaires: 1
Education Level
  Elementary Education: 1
  Grade 4: 1
  Higher Education: 1
  Postsecondary Education: 1
Audience
Location
  Germany: 1
  Mexico: 1
  United States: 1
Laws, Policies, & Programs
  Elementary and Secondary…: 1
Assessments and Surveys
What Works Clearinghouse Rating
  Does not meet standards: 1
Hedges, Larry V.; Schauer, Jacob M. – Journal of Educational and Behavioral Statistics, 2019
The problem of assessing whether experimental results can be replicated is becoming increasingly important in many areas of science. It is often assumed that assessing replication is straightforward: All one needs to do is repeat the study and see whether the results of the original and replication studies agree. This article shows that the…
Descriptors: Replication (Evaluation), Research Design, Research Methodology, Program Evaluation
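The "naive" agreement check this abstract describes — repeat the study and see whether the two results agree — can be illustrated with a minimal sketch (this is an illustration of the general idea, not the authors' proposed procedure; the effect sizes and sampling variances below are hypothetical):

```python
from math import sqrt, erf

def replication_z_test(d1, v1, d2, v2):
    """Two-sided z-test of whether an original effect estimate d1
    (sampling variance v1) and a replication estimate d2 (sampling
    variance v2) differ by more than chance alone would explain."""
    z = (d1 - d2) / sqrt(v1 + v2)
    # two-sided p-value from the standard normal CDF, via erf
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p

# hypothetical studies: original d = 0.40, replication d = 0.10,
# each with sampling variance 0.01
z, p = replication_z_test(0.40, 0.01, 0.10, 0.01)
```

Note that a non-significant difference here only means the test failed to detect disagreement; with small samples such a check can have very low power, which is part of why assessing replication is less straightforward than it looks.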
Hedges, Larry V.; Schauer, Jacob M. – Grantee Submission, 2019
The problem of assessing whether experimental results can be replicated is becoming increasingly important in many areas of science. It is often assumed that assessing replication is straightforward: All one needs to do is repeat the study and see whether the results of the original and replication studies agree. This article shows that the…
Descriptors: Replication (Evaluation), Research Design, Research Methodology, Program Evaluation
Mueller, Christoph Emanuel; Gaus, Hansjoerg – American Journal of Evaluation, 2015
In this article, we test an alternative approach to creating a counterfactual basis for estimating individual and average treatment effects. Instead of using control/comparison groups or before-measures, the so-called Counterfactual as Self-Estimated by Program Participants (CSEPP) relies on program participants' self-estimations of their own…
Descriptors: Intervention, Research Design, Research Methodology, Program Evaluation
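The CSEPP idea summarized above can be sketched in a few lines: each participant reports what their outcome would have been without the program, and the individual effect is the observed outcome minus that self-estimated counterfactual (the numbers below are hypothetical, purely for illustration):

```python
# Hypothetical outcome scores after the program, and each participant's
# self-estimate of what the score would have been without it.
observed = [72, 65, 80, 58]
counterfactual = [60, 66, 70, 50]

# individual treatment effects: observed minus self-estimated counterfactual
effects = [o - c for o, c in zip(observed, counterfactual)]

# average treatment effect across participants
average_effect = sum(effects) / len(effects)
```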
Oakes, Wendy Peia; Lane, Kathleen Lynne; Cox, Meredith; Magrane, Ashley; Jenkins, Abbie; Hankins, Katy – Education and Treatment of Children, 2012
We offer a methodological illustration for researchers and practitioners of how to conduct a development study consistent with the parameters delineated by the Institute of Education Sciences (IES; U.S. Department of Education [USDE], 2010) to explore the utility of an existing Tier 1 intervention applied as a Tier 2 support within a three-tiered…
Descriptors: Elementary School Students, Behavior Disorders, Special Education, Intervention
Diaz, Juan Jose; Handa, Sudhanshu – Journal of Human Resources, 2006
Not all policy questions can be addressed by social experiments. Nonexperimental evaluation methods provide an alternative to experimental designs but their results depend on untestable assumptions. This paper presents evidence on the reliability of propensity score matching (PSM), which estimates treatment effects under the assumption of…
Descriptors: Evaluation Methods, Research Design, Reliability, Program Evaluation
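The matching step of PSM can be sketched minimally. Assuming propensity scores have already been estimated (e.g., by a logistic regression of treatment status on covariates, which is omitted here), the sketch below pairs each treated unit with the nearest-scored control unit, with replacement, and averages the outcome differences; all data are hypothetical:

```python
def att_by_matching(treated, control):
    """Average treatment effect on the treated via 1-nearest-neighbour
    propensity-score matching with replacement.
    treated, control: lists of (propensity_score, outcome) pairs."""
    diffs = []
    for score, outcome in treated:
        # match each treated unit to the control unit with the closest score
        _, match_outcome = min(control, key=lambda c: abs(c[0] - score))
        diffs.append(outcome - match_outcome)
    return sum(diffs) / len(diffs)

# hypothetical units as (propensity score, outcome)
treated = [(0.8, 10.0), (0.6, 8.0)]
control = [(0.75, 7.0), (0.5, 6.0), (0.2, 4.0)]
att = att_by_matching(treated, control)
```

The estimate is only as good as the unconfoundedness assumption the abstract flags: matching on the score adjusts for observed covariates, not unobserved ones.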

Mark, Melvin M.; Feller, Irwin; Button, Scott B. – New Directions for Evaluation, 1997
A review of qualitative methods used in a predominantly quantitative evaluation indicates a variety of roles for mixed methods, including framing and revising research questions, assessing the validity of measures and of adaptations to program implementation, and gauging the degree of uncertainty and generalizability of conclusions.…
Descriptors: Case Studies, Integrated Activities, Models, Program Evaluation
Guba, Egon G. – 1978
Evaluation is viewed as essential to decision making and social policy development, yet conventional methods have proved disappointing or inadequate. Naturalistic inquiry (N/I) differs from conventional science in minimizing constraints on antecedent conditions (controls) and on outputs (dependent variables). N/I is phenomenological rather than…
Descriptors: Credibility, Educational Assessment, Evaluation Criteria, Evaluation Methods

Strasser, Stephen; Deniston, O. Lynn – Evaluation and Program Planning, 1978
Factors involved in pre-planned and post-planned evaluation of program effectiveness are compared: (1) reliability and cost of data; (2) internal and external validity; (3) obtrusiveness and threat; (4) goal displacement and program direction. A model to help program administrators decide which approach is more appropriate is presented. (Author/MH)
Descriptors: Data Collection, Decision Making, Evaluation Criteria, Evaluation Methods
Mandeville, Garrett K. – 1978
The RMC Research Corporation evaluation model C1--the special regression model (SRM)--was evaluated through a series of computer simulations and compared with an alternative model, the norm-referenced model (NRM). Using local data and national norm data to determine reasonable values for sample size and pretest-posttest correlation parameters, the…
Descriptors: Analysis of Covariance, Error of Measurement, Intermediate Grades, Mathematical Models
Rothman, M. L.; And Others – 1982
A practical application of generalizability theory is described, demonstrating how variance components contribute to understanding and interpreting the data collected to evaluate a program. The evaluation concerned 120 learning modules developed for the Dental Auxiliary Education Project. The goals of the project were to design, implement,…
Descriptors: Correlation, Data Collection, Dental Schools, Educational Research
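The variance-component decomposition at the heart of a study like this can be sketched for the simplest crossed persons × items design, estimating components from the expected mean squares of a two-way layout with one observation per cell (the score table below is hypothetical, not from the project):

```python
def g_study(scores):
    """Variance components for a crossed persons x items G-study design
    (one observation per cell), estimated via expected mean squares."""
    n_p, n_i = len(scores), len(scores[0])
    grand = sum(sum(row) for row in scores) / (n_p * n_i)
    p_means = [sum(row) / n_i for row in scores]
    i_means = [sum(scores[p][i] for p in range(n_p)) / n_p for i in range(n_i)]
    # sums of squares for persons, items, and the residual (interaction + error)
    ss_p = n_i * sum((m - grand) ** 2 for m in p_means)
    ss_i = n_p * sum((m - grand) ** 2 for m in i_means)
    ss_res = sum((scores[p][i] - p_means[p] - i_means[i] + grand) ** 2
                 for p in range(n_p) for i in range(n_i))
    ms_p = ss_p / (n_p - 1)
    ms_i = ss_i / (n_i - 1)
    ms_res = ss_res / ((n_p - 1) * (n_i - 1))
    # solve the expected-mean-square equations for the components
    return {"person": (ms_p - ms_res) / n_i,
            "item": (ms_i - ms_res) / n_p,
            "residual": ms_res}

# hypothetical table: 3 persons x 2 items
components = g_study([[4, 6], [5, 7], [2, 3]])
```

Relative sizes of the components show where score variation comes from — true person differences, item difficulty, or noise — which is exactly the kind of interpretation the abstract describes.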
Granville, Arthur C.; And Others – 1976
This executive summary presents the major findings of Interim Report III, which reports preliminary evaluation of Project Developmental Continuity (PDC). A Head Start demonstration program, PDC is aimed at promoting greater educational and developmental continuity as children make the transition from preschool to school. The report addresses three…
Descriptors: Attrition (Research Studies), Data Collection, Demonstration Programs, Early Childhood Education
Sloane, Howard N.; Endo, George T. – 1982
The 3-year project developed self-instructional programs and evaluated parents' use of these programs (approximately 185 families) to treat behavior problems of their handicapped children, aged 3 to 9. The project's format included five goals (e.g., determination of the degree to which parents can treat behavioral problems without professional…
Descriptors: Affective Measures, Autoinstructional Aids, Behavior Modification, Behavior Problems