Wolbring, Tobias – Evaluation Review, 2012
Background: Many university departments use students' evaluations of teaching (SET) to compare and rank courses. However, absenteeism from class is often nonrandom and, therefore, SET for different courses might not be comparable. Objective: The present study aims to answer two questions. Are SET positively biased due to absenteeism? Do…
Descriptors: Research Design, Teacher Effectiveness, Student Evaluation of Teacher Performance, Attendance
Moreno, Lorenzo; Trevino, Ernesto; Yoshikawa, Hirokazu; Mendive, Susana; Reyes, Joaquin; Godoy, Felipe; Del Rio, Francisca; Snow, Catherine; Leyva, Diana; Barata, Clara; Arbour, MaryCatherine; Rolla, Andrea – Evaluation Review, 2011
Evaluation designs for social programs are developed assuming minimal or no disruption from external shocks, such as natural disasters, because such shocks are rare enough that accounting for them in the design may not seem worthwhile. Among extreme shocks is the 2010 Chile earthquake. Un Buen Comienzo (UBC), an ongoing early childhood program in…
Descriptors: Research Design, Natural Disasters, Foreign Countries, Early Childhood Education
Merrall, Elizabeth L. C.; Bird, Sheila M. – Evaluation Review, 2009
Recent meta-analyses of drug-court studies recognized the poor methodological quality of the evaluations, with only a few being randomized. This article critiques the design of the randomized studies from a statistical perspective. Learning points are identified for future drug-court studies and are applicable to evaluations both of other…
Descriptors: Foreign Countries, Research Methodology, Research Design, Evaluation Problems
Merrall, Elizabeth L. C.; Dhami, Mandeep K.; Bird, Sheila M. – Evaluation Review, 2010
The determinants of sentencing are of much interest in criminal justice and legal research. Understanding the determinants of sentencing decisions is important for ensuring transparent, consistent, and justifiable sentencing practice that adheres to the goals of sentencing, such as the punishment, rehabilitation, deterrence, and incapacitation of…
Descriptors: Research Design, Research Methodology, Court Litigation, Social Justice
Emery, Sherry; Lee, Jungwha; Curry, Susan J.; Johnson, Tim; Sporer, Amy K.; Mermelstein, Robin; Flay, Brian; Warnecke, Richard – Evaluation Review, 2010
Background: Surveys of community-based programs are difficult to conduct when there is virtually no information about the number or locations of the programs of interest. This article describes the methodology used by the Helping Young Smokers Quit (HYSQ) initiative to identify and profile community-based youth smoking cessation programs in the…
Descriptors: Smoking, Research Methodology, Community Programs, Community Surveys
Schochet, Peter Z. – Evaluation Review, 2009
In social policy evaluations, the multiple testing problem arises from the many hypothesis tests typically conducted across multiple outcomes and subgroups, which can lead to spurious impact findings. This article discusses a framework for addressing this problem that balances Type I and Type II errors. The framework involves specifying…
Descriptors: Policy, Evaluation, Testing Problems, Hypothesis Testing
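The multiple-testing problem the abstract describes is easiest to see in code. The sketch below is illustrative only, not Schochet's specific framework: it applies two standard p-value adjustments (Bonferroni, and Benjamini-Hochberg for false discovery rate control) to made-up p-values from several outcome tests.

```python
# Illustrative sketch (not Schochet's exact procedure): adjusting p-values
# across multiple outcome tests so that spurious impact findings are less
# likely. The p-values below are invented for demonstration.

def bonferroni(pvals):
    """Bonferroni adjustment: scale each p-value by the number of tests."""
    m = len(pvals)
    return [min(1.0, p * m) for p in pvals]

def benjamini_hochberg(pvals):
    """Benjamini-Hochberg step-up adjustment, controlling the false discovery rate."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    prev = 1.0
    for rank in range(m, 0, -1):          # walk from the largest p-value down
        i = order[rank - 1]
        prev = min(prev, pvals[i] * m / rank)
        adjusted[i] = prev
    return adjusted

# Five hypothetical outcome tests from one evaluation.
pvals = [0.001, 0.012, 0.030, 0.047, 0.200]
print(bonferroni(pvals))          # conservative: controls family-wise error
print(benjamini_hochberg(pvals))  # less conservative: controls FDR
```

Bonferroni guards the chance of any false positive, at the cost of power; Benjamini-Hochberg tolerates a controlled share of false discoveries in exchange for more power, which is one way such a framework can balance the two error types.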
Tremper, Charles; Thomas, Sue; Wagenaar, Alexander C. – Evaluation Review, 2010
Evaluations that combine social science and law have tremendous potential to illuminate the effects of governmental policies and yield insights into how effectively policy makers' efforts achieve their aims. This potential is infrequently achieved, however, because such interdisciplinary research contains often overlooked substantive and…
Descriptors: Evaluation Research, Interdisciplinary Approach, Social Sciences, Research Methodology

Bickman, Leonard – Evaluation Review, 1985
An evaluation system for describing and assessing statewide services for preschool children is presented. Component theory takes the component as the unit of analysis for evaluation. This approach increases the generalizability and utilization of evaluations and enhances the ability to evaluate several programs at the state level.…
Descriptors: Early Childhood Education, Evaluation Methods, Evaluation Utilization, Formative Evaluation

Chen, Huey-Tsyh; Rossi, Peter H. – Evaluation Review, 1983
The use of theoretical models in impact assessment can heighten the power of experimental designs and compensate for some deficiencies of quasi-experimental designs. Theoretical models of implementation processes are examined, arguing that these processes are a major obstacle to fully effective programs. (Author/CM)
Descriptors: Evaluation Criteria, Evaluation Methods, Models, Program Evaluation

Hedrick, Terry E.; Shipman, Stephanie L. – Evaluation Review, 1988
Changes made in 1981 to the Aid to Families with Dependent Children (AFDC) program under the Omnibus Budget Reconciliation Act were evaluated. Multiple quasi-experimental designs (interrupted time series, non-equivalent comparison groups, and simple pre-post designs) used to address evaluation questions illustrate the issues faced by evaluators in…
Descriptors: Evaluation Methods, Program Evaluation, Quasiexperimental Design, Research Design
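The contrast among the quasi-experimental designs named in the abstract can be sketched in code. The following is illustrative only, with made-up data rather than the AFDC evaluation itself: a simple pre-post comparison conflates a secular trend with the program effect, while an interrupted time series (fit here by segmented regression) recovers the level shift.

```python
# Hedged sketch with invented data (not the AFDC analysis): comparing a
# naive pre-post design with an interrupted time series, two of the
# quasi-experimental designs the abstract mentions.

def solve3(a, b):
    """Gauss-Jordan elimination for a 3x3 system (no pivoting; fine for this demo)."""
    m = [row[:] + [bi] for row, bi in zip(a, b)]
    for col in range(3):
        m[col] = [v / m[col][col] for v in m[col]]
        for r in range(3):
            if r != col:
                factor = m[r][col]
                m[r] = [v - factor * p for v, p in zip(m[r], m[col])]
    return [m[r][3] for r in range(3)]

def segmented_regression(ts, ys, cutoff):
    """OLS fit of y = b0 + b1*t + b2*post, where post = 1 once the program starts."""
    X = [[1.0, float(t), 1.0 if t >= cutoff else 0.0] for t in ts]
    n = len(ts)
    xtx = [[sum(X[i][p] * X[i][q] for i in range(n)) for q in range(3)] for p in range(3)]
    xty = [sum(X[i][p] * ys[i] for i in range(n)) for p in range(3)]
    return solve3(xtx, xty)

# Made-up series: baseline 10, upward trend of +1 per period, and a true
# level shift of +3 when the program starts at t = 5.
ts = list(range(10))
ys = [10 + t + (3 if t >= 5 else 0) for t in ts]

pre = [y for t, y in zip(ts, ys) if t < 5]
post = [y for t, y in zip(ts, ys) if t >= 5]
naive = sum(post) / len(post) - sum(pre) / len(pre)  # pre-post: trend + effect
b0, b1, b2 = segmented_regression(ts, ys, cutoff=5)

print(naive)          # 8.0 -- overstates the effect by absorbing the trend
print(round(b2, 6))   # 3.0 -- the true level shift
```

The simple pre-post design attributes the entire 8-point rise to the program; modeling the pre-existing trend isolates the 3-point shift, which is precisely the kind of issue the article's designs were chosen to address.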

Marconi, Katherine M.; Rudzinski, Karen A. – Evaluation Review, 1995
A formative evaluation model is proposed for use by administrators of large health services research grant programs. The model assists in assessing the purpose, methodology, and level of analysis of funded research. It is illustrated through a discussion of HIV/AIDS care. (SLD)
Descriptors: Acquired Immune Deficiency Syndrome, Administrators, Evaluation Methods, Formative Evaluation

Heath, Linda; And Others – Evaluation Review, 1982
A persistent problem for program evaluators is how to maximize the internal validity and inferential power of research designs while still assessing the long-term effects of social programs. A multimethodological research strategy combining a delayed control group true experiment with a multiple time series and switching replications design…
Descriptors: Control Groups, Evaluation Methods, Intervention, Program Evaluation

Chelimsky, Eleanor – Evaluation Review, 1985
Four aspects of the relationship between auditing and evaluation in their approaches to program assessment are examined: (1) their different origins; (2) the definitions and purposes of both, and the questions they seek to answer; (3) contrasting viewpoints and emphases of auditors and evaluators; and (4) commonalities of interest and potential…
Descriptors: Accountability, Accounting, Data Analysis, Evaluation Methods

St.Pierre, Robert G. – Evaluation Review, 1980
Factors that influence the sample size necessary for longitudinal evaluations include the nature of the evaluation questions, nature of available comparison groups, consistency of the treatment in different sites, effect size, attrition rate, significance level for statistical tests, and statistical power. (Author/GDC)
Descriptors: Evaluation Methods, Field Studies, Influences, Longitudinal Studies

Heilman, John G. – Evaluation Review, 1980
The choice between experimental research and process-oriented research as the only valid paradigm of evaluation research is rejected; it is argued that there is a middle ground. Suggestions are made for mixing the two approaches to suit particular research settings. (Author/GK)
Descriptors: Evaluation Methods, Evaluative Thinking, Models, Program Evaluation