Publication Date
  In 2025: 0
  Since 2024: 0
  Since 2021 (last 5 years): 0
  Since 2016 (last 10 years): 1
  Since 2006 (last 20 years): 5
Descriptor
  Research Design: 5
  Statistical Bias: 5
  Comparative Analysis: 4
  Evaluation Methods: 3
  Program Evaluation: 3
  Control Groups: 2
  Foreign Countries: 2
  Intervention: 2
  Validity: 2
  Accuracy: 1
  Benchmarking: 1
Source
  American Journal of Evaluation: 5
Publication Type
  Journal Articles: 5
  Reports - Descriptive: 2
  Reports - Research: 2
  Information Analyses: 1
  Tests/Questionnaires: 1
Education Level
  Higher Education: 1
  Postsecondary Education: 1
Wing, Coady; Bello-Gomez, Ricardo A. – American Journal of Evaluation, 2018
Treatment effect estimates from a "regression discontinuity design" (RDD) have high internal validity. However, the arguments that support the design apply to a subpopulation that is narrower and usually different from the population of substantive interest in evaluation research. The disconnect between RDD population and the…
Descriptors: Regression (Statistics), Research Design, Validity, Evaluation Methods
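For readers unfamiliar with the design named in this entry, here is a minimal sharp-RDD sketch. It is not drawn from Wing and Bello-Gomez; the simulated data, bandwidth, cutoff, and variable names are all hypothetical. It estimates the treatment effect as the jump in a local linear fit at the cutoff.

```python
# Minimal sharp-RDD sketch (illustrative only; simulated data, hypothetical names).
# The treatment effect is read off as the discontinuity in a local linear fit.
import numpy as np

rng = np.random.default_rng(0)
n, cutoff, bandwidth = 2000, 0.0, 0.5

running = rng.uniform(-1, 1, n)              # assignment (running) variable
treated = (running >= cutoff).astype(float)  # sharp assignment rule
outcome = 1.0 + 0.8 * running + 2.0 * treated + rng.normal(0, 1, n)  # true jump = 2.0

# Keep only observations within the bandwidth around the cutoff.
keep = np.abs(running - cutoff) <= bandwidth
x, d, y = running[keep] - cutoff, treated[keep], outcome[keep]

# Local linear regression with separate slopes on each side of the cutoff.
X = np.column_stack([np.ones_like(x), x, d, d * x])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"estimated effect at the cutoff: {coef[2]:.2f}")  # coefficient on the treatment dummy
```

Because only observations inside the bandwidth enter the fit, the estimate speaks to units near the cutoff, which is the narrow subpopulation the abstract contrasts with the population of substantive interest.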
St. Clair, Travis; Cook, Thomas D.; Hallberg, Kelly – American Journal of Evaluation, 2014
Although evaluators often use an interrupted time series (ITS) design to test hypotheses about program effects, there are few empirical tests of the design's validity. We take a randomized experiment on an educational topic and compare its effects to those from a comparative ITS (CITS) design that uses the same treatment group as the experiment…
Descriptors: Time, Evaluation Methods, Measurement Techniques, Research Design
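As a rough illustration of the comparative interrupted time series (CITS) design this entry evaluates, the following sketch fits a simplified segmented regression in which the treatment group's post-intervention level shift is measured relative to a comparison series. The data are simulated and the model omits trend interactions; nothing here reproduces the authors' analysis.

```python
# Hypothetical CITS sketch: segmented regression with a group-by-post interaction.
# Simulated data only; the interaction coefficient recovers the built-in effect of 2.0.
import numpy as np

rng = np.random.default_rng(1)
months = np.arange(48)
post = (months >= 24).astype(float)          # intervention at month 24

def series(effect):
    # Shared level and trend; only the post-intervention shift differs by group.
    return 10 + 0.1 * months + effect * post + rng.normal(0, 0.5, months.size)

y_treat, y_comp = series(effect=2.0), series(effect=0.0)

# Stack both groups and fit: intercept, time trend, post step, group, group x post.
g = np.concatenate([np.ones_like(months), np.zeros_like(months)])
t = np.concatenate([months, months]).astype(float)
p = np.concatenate([post, post])
y = np.concatenate([y_treat, y_comp])
X = np.column_stack([np.ones_like(t), t, p, g, g * p])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"CITS estimate of the post-intervention shift: {coef[4]:.2f}")
```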
Mueller, Christoph Emanuel; Gaus, Hansjoerg – American Journal of Evaluation, 2015
In this article, we test an alternative approach to creating a counterfactual basis for estimating individual and average treatment effects. Instead of using control/comparison groups or before-measures, the so-called Counterfactual as Self-Estimated by Program Participants (CSEPP) relies on program participants' self-estimations of their own…
Descriptors: Intervention, Research Design, Research Methodology, Program Evaluation
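On one reading of the abstract, the CSEPP approach replaces a control group or pretest with each participant's own estimate of how they would have fared without the program. A tiny sketch of that arithmetic follows; the ratings are invented and the variable names are hypothetical, not taken from Mueller and Gaus.

```python
# Hypothetical CSEPP-style calculation: the self-estimated "without the program"
# rating serves as each participant's counterfactual outcome. Data are invented.
import numpy as np

# Post-program ratings and self-estimated counterfactual ratings (e.g., on a 1-10 scale).
observed_post = np.array([7, 8, 6, 9, 7, 8])
self_estimated_counterfactual = np.array([5, 6, 6, 7, 4, 6])

individual_effects = observed_post - self_estimated_counterfactual
print("individual treatment effects:", individual_effects)
print(f"average treatment effect (CSEPP): {individual_effects.mean():.2f}")
```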
Hansen, Henrik; Klejnstrup, Ninja Ritter; Andersen, Ole Winckler – American Journal of Evaluation, 2013
There is a long-standing debate as to whether nonexperimental estimators of causal effects of social programs can overcome selection bias. Most existing reviews either are inconclusive or point to significant selection biases in nonexperimental studies. However, many of the reviews, the so-called "between-studies," do not make direct…
Descriptors: Foreign Countries, Developing Nations, Outcome Measures, Comparative Analysis
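To make the within-study comparison logic concrete, the sketch below simulates one population that yields both an experimental benchmark and a self-selected (nonexperimental) estimate; the gap between them is read as selection bias. It is entirely simulated and is not the authors' data, estimators, or code.

```python
# Illustrative within-study comparison: experimental benchmark vs. naive
# nonexperimental estimate on the same simulated population.
import numpy as np

rng = np.random.default_rng(2)
n, true_effect = 5000, 1.0
ability = rng.normal(0, 1, n)                # confounder driving self-selection

# Experimental benchmark: random assignment breaks the link with ability.
rand_treat = rng.integers(0, 2, n)
y_rand = 2.0 + 0.5 * ability + true_effect * rand_treat + rng.normal(0, 1, n)
benchmark = y_rand[rand_treat == 1].mean() - y_rand[rand_treat == 0].mean()

# Nonexperimental estimate: higher-ability units select into the program.
self_select = (ability + rng.normal(0, 1, n) > 0).astype(int)
y_obs = 2.0 + 0.5 * ability + true_effect * self_select + rng.normal(0, 1, n)
naive = y_obs[self_select == 1].mean() - y_obs[self_select == 0].mean()

print(f"experimental benchmark:        {benchmark:.2f}")
print(f"naive nonexperimental estimate: {naive:.2f}")
print(f"implied selection bias:         {naive - benchmark:.2f}")
```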
House, Ernest R. – American Journal of Evaluation, 2008
Drug studies are often cited as the best exemplars of evaluation design. However, many of these studies are seriously biased in favor of positive findings for the drugs evaluated, even to the point where dangerous effects are hidden. In spite of using randomized designs and double blinding, drug companies have found ways of producing the results…
Descriptors: Integrity, Evaluation Methods, Program Evaluation, Experimenter Characteristics