Publication Date
  In 2025: 0
  Since 2024: 0
  Since 2021 (last 5 years): 5
  Since 2016 (last 10 years): 16
  Since 2006 (last 20 years): 52
Source
  American Journal of Evaluation: 58
Publication Type
  Journal Articles: 58
  Reports - Research: 24
  Reports - Descriptive: 15
  Reports - Evaluative: 14
  Information Analyses: 4
  Opinion Papers: 2
  Tests/Questionnaires: 1
Education Level
  Higher Education: 5
  Adult Education: 4
  Elementary Secondary Education: 3
  Postsecondary Education: 3
  Early Childhood Education: 1
  Elementary Education: 1
  Grade 10: 1
  Grade 11: 1
  Grade 5: 1
  Grade 8: 1
  High Schools: 1
Location
  Germany: 2
  Maryland: 2
  Arizona: 1
  California: 1
  France: 1
  Indiana: 1
  Mexico: 1
  New Zealand: 1
  Nicaragua: 1
  Texas: 1
  Tonga: 1
Laws, Policies, & Programs
  No Child Left Behind Act 2001: 2
  Social Security: 1
Assessments and Surveys
  Bayley Mental Development…: 1
  Bayley Scales of Infant…: 1
  Early Childhood Environment…: 1
  Early Childhood Longitudinal…: 1
Ahlin, Eileen M. – American Journal of Evaluation, 2015
Evaluation research conducted in agencies that sanction law violators is often challenging, and due process may preclude evaluators from using experimental methods in traditional criminal justice agencies such as police, courts, and corrections. However, administrative agencies often deal with the same population but are not bound by due process…
Descriptors: Research Methodology, Evaluation Research, Criminals, Correctional Institutions
Le Menestrel, Suzanne M.; Walahoski, Jill S.; Mielke, Monica B. – American Journal of Evaluation, 2014
The 4-H youth development organization is a complex public-private partnership between the U.S. Department of Agriculture's National Institute of Food and Agriculture, the nation's Cooperative Extension system, and National 4-H Council, a private, nonprofit partner. The current article focuses on a partnership approach to the…
Descriptors: Youth Programs, Evaluators, Cooperation, Evaluation Methods
Patton, Michael Quinn – American Journal of Evaluation, 2015
Our understanding of programs is enhanced when trained, skilled, and observant evaluators go "into the field" (the real world where programs are conducted), paying attention to what's going on, systematically documenting what they see, and reporting what they learn. The article opens by presenting and illustrating twelve reasons for…
Descriptors: Program Evaluation, Evaluation Methods, Design Requirements, Field Studies
Bell, Stephen H.; Peck, Laura R. – American Journal of Evaluation, 2013
To answer "what works?" questions about policy interventions based on an experimental design, Peck (2003) proposes to use baseline characteristics to symmetrically divide treatment and control group members into subgroups defined by endogenously determined post-random-assignment events. Symmetric prediction of these subgroups in both…
Descriptors: Program Effectiveness, Experimental Groups, Control Groups, Program Evaluation
Westine, Carl D. – American Journal of Evaluation, 2016
Little is known empirically about intraclass correlations (ICCs) for multisite cluster randomized trial (MSCRT) designs, particularly in science education. In this study, ICCs suitable for science achievement studies using a three-level (students in schools in districts) MSCRT design that block on district are estimated and examined. Estimates of…
Descriptors: Efficiency, Evaluation Methods, Science Achievement, Correlation
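The intraclass correlation the Westine abstract refers to measures how much of the outcome variance sits between clusters rather than within them. As a rough illustration only (a two-level sketch, not the article's three-level MSCRT estimator), with all simulation parameters invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a two-level structure: students nested in schools.
# True between-school variance 0.10, within-school variance 0.90 (invented values).
n_schools, n_students = 50, 30
school_effects = rng.normal(0.0, np.sqrt(0.10), n_schools)
scores = school_effects[:, None] + rng.normal(0.0, np.sqrt(0.90), (n_schools, n_students))

# One-way ANOVA (method-of-moments) estimator of the intraclass correlation:
# ICC = sigma2_between / (sigma2_between + sigma2_within)
grand_mean = scores.mean()
ms_between = n_students * ((scores.mean(axis=1) - grand_mean) ** 2).sum() / (n_schools - 1)
ms_within = ((scores - scores.mean(axis=1, keepdims=True)) ** 2).sum() / (n_schools * (n_students - 1))
sigma2_between = max((ms_between - ms_within) / n_students, 0.0)
icc = sigma2_between / (sigma2_between + ms_within)
print(round(icc, 3))  # roughly the simulated value of 0.10
```

In a power analysis for a cluster randomized trial, a larger ICC means each additional student within a school adds less information, which is why empirical ICC estimates like those in the article matter for design.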
St. Clair, Travis; Cook, Thomas D.; Hallberg, Kelly – American Journal of Evaluation, 2014
Although evaluators often use an interrupted time series (ITS) design to test hypotheses about program effects, there are few empirical tests of the design's validity. We take a randomized experiment on an educational topic and compare its effects to those from a comparative ITS (CITS) design that uses the same treatment group as the experiment…
Descriptors: Time, Evaluation Methods, Measurement Techniques, Research Design
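The interrupted time series design tested in the St. Clair, Cook, and Hallberg study is commonly analyzed with segmented regression: an intercept, a pre-intervention trend, a post-intervention level change, and a post-intervention trend change. A minimal sketch on simulated data (the series length, cut point, and effect size are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated monthly outcome with a true level shift of +2.0 at the intervention.
t = np.arange(48)
cut = 24
y = 10 + 0.05 * t + 2.0 * (t >= cut) + rng.normal(0, 0.5, t.size)

# Segmented regression design matrix: intercept, pre-trend,
# post-intervention level change, post-intervention trend change.
X = np.column_stack([
    np.ones_like(t, dtype=float),
    t.astype(float),
    (t >= cut).astype(float),
    np.where(t >= cut, t - cut, 0).astype(float),
])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
level_shift = beta[2]
print(round(level_shift, 2))  # estimate of the +2.0 level shift
```

A comparative ITS (CITS) design adds an untreated comparison series estimated the same way, so the treatment effect becomes the difference between the two series' level and trend changes.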
DeBarger, Angela Haydel; Penuel, William R.; Harris, Christopher J.; Kennedy, Cathleen A. – American Journal of Evaluation, 2016
Evaluators must employ research designs that generate compelling evidence related to the worth or value of programs, of which assessment data often play a critical role. This article focuses on assessment design in the context of evaluation. It describes the process of using the Framework for K-12 Science Education and Next Generation Science…
Descriptors: Intervention, Program Evaluation, Research Design, Science Tests
Mueller, Christoph Emanuel; Gaus, Hansjoerg – American Journal of Evaluation, 2015
In this article, we test an alternative approach to creating a counterfactual basis for estimating individual and average treatment effects. Instead of using control/comparison groups or before-measures, the so-called Counterfactual as Self-Estimated by Program Participants (CSEPP) relies on program participants' self-estimations of their own…
Descriptors: Intervention, Research Design, Research Methodology, Program Evaluation
Hansen, Henrik; Klejnstrup, Ninja Ritter; Andersen, Ole Winckler – American Journal of Evaluation, 2013
There is a long-standing debate as to whether nonexperimental estimators of causal effects of social programs can overcome selection bias. Most existing reviews either are inconclusive or point to significant selection biases in nonexperimental studies. However, many of the reviews, the so-called "between-studies," do not make direct…
Descriptors: Foreign Countries, Developing Nations, Outcome Measures, Comparative Analysis
Ryan, Katherine E.; Gandha, Tysza; Culbertson, Michael J.; Carlson, Crystal – American Journal of Evaluation, 2014
In evaluation and applied social research, focus groups may be used to gather different kinds of evidence (e.g., opinion, tacit knowledge). In this article, we argue that making focus group design choices explicitly in relation to the type of evidence required would enhance the empirical value and rigor associated with focus group utilization. We…
Descriptors: Focus Groups, Research Methodology, Research Design, Educational Research
Does My Program Really Make a Difference? Program Evaluation Utilizing Aggregate Single-Subject Data
Burns, Catherine E. – American Journal of Evaluation, 2015
In the current climate of increasing fiscal and clinical accountability, information is required about overall program effectiveness using clinical data. These requests present a challenge for programs utilizing single-subject data due to the use of highly individualized behavior plans and behavioral monitoring. Subsequently, the diversity of the…
Descriptors: Program Evaluation, Program Effectiveness, Data Analysis, Research Design
Azzam, Tarek; Jacobson, Miriam R. – American Journal of Evaluation, 2013
This article explores the viability of online crowdsourcing for creating matched-comparison groups. This exploratory study compares survey results from a randomized control group to survey results from a matched-comparison group created from Amazon.com's MTurk crowdsourcing service to determine their comparability. Study findings indicate…
Descriptors: Matched Groups, Control Groups, Comparative Analysis, Evaluation
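The matched-comparison idea in the Azzam and Jacobson abstract can be illustrated with greedy nearest-neighbor matching on standardized covariates. This is a generic sketch with invented covariates, not the authors' actual procedure:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical covariates (e.g., age, years of education) for a randomized
# control group and a larger crowdsourced respondent pool (all values invented).
control = rng.normal([35, 14], [10, 2], (100, 2))
pool = rng.normal([40, 13], [12, 3], (1000, 2))

# Greedy nearest-neighbor matching without replacement on standardized covariates.
mu, sd = pool.mean(axis=0), pool.std(axis=0)
c_std, p_std = (control - mu) / sd, (pool - mu) / sd
available = np.ones(len(pool), dtype=bool)
matches = []
for row in c_std:
    d = np.linalg.norm(p_std - row, axis=1)  # distance to every pool member
    d[~available] = np.inf                   # skip already-used matches
    j = int(d.argmin())
    available[j] = False
    matches.append(j)
matched = pool[matches]

# The matched group's covariate means should sit closer to the control group's
# than the raw pool's means do.
print(control.mean(axis=0), pool.mean(axis=0), matched.mean(axis=0))
```

Matching balances only the covariates that are observed and used; comparability on unobserved characteristics is exactly what such studies have to argue for.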
Dong, Nianbo – American Journal of Evaluation, 2015
Researchers have become increasingly interested in the main and interaction effects of two variables (A and B, e.g., two treatment variables, or one treatment variable and one moderator) on program outcomes. A challenge for estimating main and interaction effects is to eliminate selection bias across the A-by-B groups. I introduce Rubin's causal model to…
Descriptors: Probability, Statistical Analysis, Research Design, Causal Models
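With random assignment to all four A-by-B cells, the main and interaction effects the Dong abstract discusses are identified by simple differences in cell means. A hedged illustration on simulated data (effect sizes and sample size invented for the example):

```python
import numpy as np

rng = np.random.default_rng(2)

# 2x2 factorial: factors A and B each 0/1; true simple effects 1.0 (A) and
# 0.5 (B) at the other factor's baseline, with a 0.75 interaction (invented).
n = 2000
a = rng.integers(0, 2, n)
b = rng.integers(0, 2, n)
y = 1.0 * a + 0.5 * b + 0.75 * a * b + rng.normal(0, 1, n)

# Cell means of the four A-by-B groups.
m = {(i, j): y[(a == i) & (b == j)].mean() for i in (0, 1) for j in (0, 1)}

# Under random assignment, differences in cell means identify the effects:
effect_a = m[(1, 0)] - m[(0, 0)]  # effect of A when B = 0
effect_b = m[(0, 1)] - m[(0, 0)]  # effect of B when A = 0
interaction = (m[(1, 1)] - m[(0, 1)]) - (m[(1, 0)] - m[(0, 0)])
print(round(effect_a, 2), round(effect_b, 2), round(interaction, 2))
```

When assignment to A or B is not random, the cell-mean differences confound the effects with selection into cells, which is the problem the article's use of Rubin's causal model addresses.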
Braverman, Marc T. – American Journal of Evaluation, 2013
Sound evaluation planning requires numerous decisions about how constructs in a program theory will be translated into measures and instruments that produce evaluation data. This article, the first in a dialogue exchange, examines how decisions about measurement are (and should be) made, especially in the context of small-scale local program…
Descriptors: Evaluation Methods, Methods Research, Research Methodology, Research Design
Harvill, Eleanor L.; Peck, Laura R.; Bell, Stephen H. – American Journal of Evaluation, 2013
Using exogenous characteristics to identify endogenous subgroups, the approach discussed in this method note creates symmetric subsets within treatment and control groups, allowing the analysis to take advantage of an experimental design. In order to maintain treatment-control symmetry, however, prior work has posited that it is necessary to use…
Descriptors: Experimental Groups, Control Groups, Research Design, Sampling