Rhodes, William – Evaluation Review, 2012
Research synthesis of evaluation findings is a multistep process. An investigator identifies a research question, acquires the relevant literature, codes findings from that literature, and analyzes the coded data to estimate the average treatment effect and its distribution in a population of interest. The process of estimating the average…
Descriptors: Social Sciences, Regression (Statistics), Meta Analysis, Models
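
The averaging step this abstract describes — pooling coded effect sizes into an estimate of the average treatment effect and its spread — can be sketched with a DerSimonian–Laird random-effects model. This is a minimal illustration, not the paper's own method; the effect sizes and variances below are hypothetical.

```python
def dersimonian_laird(effects, variances):
    """Pool study effect sizes with a DerSimonian-Laird random-effects model.

    Returns (pooled_effect, tau2), where tau2 is the estimated
    between-study variance."""
    w = [1.0 / v for v in variances]                  # fixed-effect weights
    fe = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    # Cochran's Q measures excess dispersion across studies
    q = sum(wi * (yi - fe) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                     # between-study variance
    # Re-weight with tau2 added to each study's sampling variance
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    return pooled, tau2

# Hypothetical coded findings: standardized effects and their variances
effects = [0.5, 0.1, 0.6, 0.0]
variances = [0.01, 0.01, 0.02, 0.02]
est, tau2 = dersimonian_laird(effects, variances)
print(round(est, 3), round(tau2, 4))
```

When Q exceeds its degrees of freedom, tau2 is positive and the pooled estimate weights small studies relatively more than a fixed-effect average would.
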

Tremper, Charles; Thomas, Sue; Wagenaar, Alexander C. – Evaluation Review, 2010
Evaluations that combine social science and law have tremendous potential to illuminate the effects of governmental policies and yield insights into how effectively policy makers' efforts achieve their aims. This potential is infrequently achieved, however, because such interdisciplinary research contains often overlooked substantive and…
Descriptors: Evaluation Research, Interdisciplinary Approach, Social Sciences, Research Methodology

Kostoff, Ronald N.; And Others – Evaluation Review, 1994
Articles in this special issue deal with the assessment of the impact of research and are divided into segments concerning semiquantitative approaches; qualitative approaches; and quantitative and fiscal approaches. These articles illustrate the importance of the role of motivation and associated incentives. (SLD)
Descriptors: Cost Effectiveness, Economic Factors, Evaluation Methods, Evaluation Utilization

Heilman, John G. – Evaluation Review, 1980
The forced choice between experimental research and process-oriented research as the only valid paradigm of evaluation research is rejected; it is argued that there is a middle ground. Suggestions are made for mixing the two approaches to suit particular research settings. (Author/GK)
Descriptors: Evaluation Methods, Evaluative Thinking, Models, Program Evaluation

Kostoff, Ronald N. – Evaluation Review, 1994
Strengths and weaknesses of three types of semiquantitative methods used by the federal government in research impact assessment are presented, and examples of their use are reviewed. These include the classic retrospective method, another retrospective approach, and accomplishments books used by selected research-sponsoring organizations. (SLD)
Descriptors: Cost Effectiveness, Evaluation Methods, Evaluation Utilization, Federal Government

Nagel, Stuart S. – Evaluation Review, 1984
Introspective interviewing can often determine the magnitude of relations more meaningfully than statistical analysis. Deduction from empirically validated premises avoids many research design problems. Guesswork can be combined with sensitivity analysis to determine the effects of guesses and missing information on conclusions. (Author/DWH)
Descriptors: Deduction, Evaluation Methods, Intuition, Policy Formation
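
The guess-plus-sensitivity-analysis idea can be sketched as a parameter sweep: hold the known quantities fixed, vary the guessed quantity over its plausible range, and see whether the policy conclusion survives. The program costs and effect-size range below are hypothetical.

```python
def net_benefit(effect_size, cost, benefit_per_unit):
    """Net benefit of a program given a guessed effect size."""
    return effect_size * benefit_per_unit - cost

# Hypothetical program: cost and per-unit benefit are known,
# but the effect size is a guess with a plausible range.
cost, benefit_per_unit = 50_000.0, 400_000.0

# Sweep the guess over its plausible range and record the conclusion
conclusions = {
    guess: net_benefit(guess, cost, benefit_per_unit) > 0
    for guess in (0.05, 0.10, 0.15, 0.20, 0.25)
}
robust = all(conclusions.values())  # does the verdict survive every guess?
print(conclusions, robust)
```

Here the verdict flips between guesses of 0.10 and 0.15, so the missing information is decisive and the conclusion is not robust to the guesswork.
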

Alemi, Farrokh – Evaluation Review, 1987
Trade-offs are implicit in choosing a subjective or objective method for evaluating social programs. The differences between Bayesian and traditional statistics, decision and cost-benefit analysis, and anthropological and traditional case systems illustrate trade-offs in choosing methods because of limited resources. (SLD)
Descriptors: Bayesian Statistics, Case Studies, Evaluation Methods, Program Evaluation
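
The Bayesian-versus-traditional trade-off can be made concrete with a small example: the same hypothetical success count summarized by a frequentist point estimate with a normal-approximation interval, and by a posterior mean under a weak Beta prior. The data and prior are invented for illustration.

```python
import math

# Hypothetical program outcome data: 18 successes in 25 cases
successes, n = 18, 25

# Traditional (frequentist) point estimate and normal-approximation 95% CI
p_hat = successes / n
se = math.sqrt(p_hat * (1 - p_hat) / n)
freq_ci = (p_hat - 1.96 * se, p_hat + 1.96 * se)

# Bayesian: a Beta(2, 2) prior (weak belief that extremes are unlikely)
# updated by the data gives a Beta(2 + 18, 2 + 7) posterior
a, b = 2 + successes, 2 + (n - successes)
posterior_mean = a / (a + b)
print(round(p_hat, 3), round(posterior_mean, 3),
      tuple(round(v, 3) for v in freq_ci))
```

The posterior mean is shrunk toward the prior center of 0.5 — the kind of subjective-versus-objective trade-off the abstract refers to.
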

Dennis, Michael L. – Evaluation Review, 1990
Six potential problems with the use of randomized experiments to evaluate programs in the field are addressed. Problems include treatment dilution, treatment contamination or confounding, inaccurate case flow and power estimates, violations of the random assignment processes, changes in the environmental context, and changes in the treatment…
Descriptors: Drug Rehabilitation, Evaluation Problems, Experiments, Field Studies
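
One of the listed problems — treatment dilution feeding into inaccurate power estimates — can be sketched with a normal-approximation power formula for a two-sample comparison. The planned effect size, sample size, and dilution rate below are hypothetical.

```python
import math
from statistics import NormalDist

def power_two_sample(effect, n_per_arm, alpha=0.05):
    """Approximate power of a two-sample z-test for a standardized
    mean difference (ignores the far rejection tail)."""
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha / 2)
    se = math.sqrt(2.0 / n_per_arm)
    return 1.0 - nd.cdf(z_alpha - effect / se)

planned = power_two_sample(0.40, n_per_arm=100)
# Treatment dilution: only 60% of the treatment arm actually gets treated,
# so the intent-to-treat effect shrinks to 0.6 * 0.40 = 0.24
diluted = power_two_sample(0.40 * 0.60, n_per_arm=100)
print(round(planned, 2), round(diluted, 2))
```

A study planned for roughly 80% power drops to roughly 40% once dilution shrinks the effective effect — the case-flow and power problem the abstract flags.
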

Hennessy, Michael; Saltz, Robert F. – Evaluation Review, 1989
A beverage-server intervention project at two West Coast Navy bases that attempted to reduce levels of alcoholic intoxication via policy changes and server training is described. Data obtained via interviews and structured observations of 1,511 club customers indicate methodological bias and self-selection effects. Bias adjustments were performed…
Descriptors: Alcohol Abuse, Clubs, Dining Facilities, Enlisted Personnel

Murray, David M.; And Others – Evaluation Review, 1994
This article presents a synopsis of each of seven presentations given at a conference on design and analysis in community trial studies. Papers identify problems with community trials and discuss strengths and weaknesses associated with design and analysis strategies. Areas of consensus are summarized. (SLD)
Descriptors: Cohort Analysis, Conferences, Evaluation Methods, Intervention

Aiken, Leona S.; West, Stephen G. – Evaluation Review, 1990
The validity of true experiments is threatened by a class of self-report biases that affect all respondents at pretest, but which are diminished by treatment. Four of these inaccurate self-evaluation biases are discussed. Means of detection include external criteria, special conditions of measurement, and retrospective pretests. (TJH)
Descriptors: Bias, Drug Rehabilitation, Evaluation Problems, Experiments

Trochim, William M.K. – Evaluation Review, 1982
Meta-analysis of Title I program evaluations shows that the norm-referenced model overestimates positive effectiveness, while the regression-discontinuity design underestimates it. Potential biases include residual regression artifacts, attrition and time-of-testing problems in the norm-referenced design, and assignment, measurement, and data…
Descriptors: Compensatory Education, Data Collection, Elementary Secondary Education, Evaluation Methods
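
A regression-discontinuity estimate of the kind this abstract evaluates can be sketched on simulated data: assignment strictly by a pretest cutoff, a regression line fitted on each side, and the treatment effect read off as the gap between the lines at the cutoff. The cutoff, effect size, and noise level below are invented.

```python
import random

def linfit(xs, ys):
    """Ordinary least squares for y = a + b*x (closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

random.seed(0)
CUTOFF, TRUE_EFFECT = 50.0, 5.0
# Hypothetical Title-I-style data: students below the pretest cutoff
# receive the program; the posttest tracks the pretest plus noise
pre = [random.uniform(20, 80) for _ in range(2000)]
post = [x + (TRUE_EFFECT if x < CUTOFF else 0.0) + random.gauss(0, 3)
        for x in pre]

below = [(x, y) for x, y in zip(pre, post) if x < CUTOFF]
above = [(x, y) for x, y in zip(pre, post) if x >= CUTOFF]

# Fit a line on each side and compare predictions at the cutoff
a0, b0 = linfit([x for x, _ in below], [y for _, y in below])
a1, b1 = linfit([x for x, _ in above], [y for _, y in above])
rd_estimate = (a0 + b0 * CUTOFF) - (a1 + b1 * CUTOFF)
print(round(rd_estimate, 2))
```

Because assignment is determined entirely by the pretest, the comparison at the cutoff recovers the programmed effect even though treated students score lower overall.
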

Ormala, Erkki – Evaluation Review, 1994
Trends in European practice that relate to qualitative assessment in the evaluation of the impact of research and innovation are discussed and analyzed. To date, European evaluations have been mainly concerned with quality and direct impact with few assessments of medium- or long-term impact. (SLD)
Descriptors: Data Analysis, Data Collection, Evaluation Methods, Foreign Countries

Moffitt, Robert – Evaluation Review, 1991
Statistical methods for program evaluation with nonexperimental data are reviewed with emphasis on circumstances in which nonexperimental data are valid. Three solutions are proposed for problems of selection bias, and implications for evaluation design and data collection and analysis are discussed. (SLD)
Descriptors: Bias, Cohort Analysis, Equations (Mathematics), Estimation (Mathematics)
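
The selection-bias problem, and the simplest class of nonexperimental fix (comparing only within strata of an observed confounder), can be sketched on simulated data. The confounding structure below is invented for illustration and is not from the paper.

```python
import random

random.seed(1)
TRUE_EFFECT = 2.0
# Hypothetical nonexperimental data: motivation drives both program
# take-up and the outcome, so the naive comparison is confounded
people = []
for _ in range(20000):
    motivation = random.gauss(0, 1)
    treated = motivation + random.gauss(0, 1) > 0       # self-selection
    outcome = (3.0 * motivation
               + (TRUE_EFFECT if treated else 0.0)
               + random.gauss(0, 1))
    people.append((motivation, treated, outcome))

def mean(xs):
    return sum(xs) / len(xs)

naive = (mean([y for m, t, y in people if t])
         - mean([y for m, t, y in people if not t]))

# Selection on observables: compare treated and untreated only within
# narrow strata of the observed confounder, then average the gaps
strata_gaps = []
for lo in [x / 2 for x in range(-4, 4)]:                # strata of width 0.5
    t = [y for m, t_, y in people if t_ and lo <= m < lo + 0.5]
    c = [y for m, t_, y in people if not t_ and lo <= m < lo + 0.5]
    if t and c:
        strata_gaps.append(mean(t) - mean(c))
adjusted = mean(strata_gaps)
print(round(naive, 2), round(adjusted, 2))
```

The naive contrast badly overstates the effect; the stratified contrast lands near the true value — valid here only because selection runs entirely through the observed variable.
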

Averch, Harvey A. – Evaluation Review, 1994
This article reviews the principal methods economists and cost benefit analysts use in evaluating research. Two common approaches are surplus measures (combinations of consumer and producer surpluses) and productivity measures. Technical difficulties and political and organizational constraints are discussed for these measures. (SLD)
Descriptors: Consumer Economics, Cost Effectiveness, Economic Impact, Evaluation Methods
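
The consumer-surplus half of the surplus measures mentioned here can be sketched numerically: surplus is the area between willingness to pay and the market price, integrated here by the trapezoid rule over a hypothetical linear demand curve.

```python
def inverse_demand(q):
    """Hypothetical linear inverse demand: willingness to pay at quantity q."""
    return 50.0 - 0.5 * q

price = 20.0
q_star = 60.0   # quantity at which inverse_demand(q) equals the price

# Consumer surplus: area between willingness to pay and price,
# integrated by the trapezoid rule
n = 10_000
dq = q_star / n
surplus = sum(
    (inverse_demand(i * dq) + inverse_demand((i + 1) * dq)) / 2 - price
    for i in range(n)
) * dq
print(round(surplus, 1))  # triangle area: 0.5 * 60 * 30 = 900
```

The trapezoid rule is exact for a linear curve; for estimated demand curves the same integral is taken over the fitted function.
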