Showing 1 to 15 of 36 results
Peer reviewed
Rhodes, William – Evaluation Review, 2012
Research synthesis of evaluation findings is a multistep process. An investigator identifies a research question, acquires the relevant literature, codes findings from that literature, and analyzes the coded data to estimate the average treatment effect and its distribution in a population of interest. The process of estimating the average…
Descriptors: Social Sciences, Regression (Statistics), Meta Analysis, Models
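The final analysis step this abstract outlines, pooling coded findings into an average treatment effect, is commonly done with inverse-variance weighting. A minimal fixed-effect sketch; the study effect sizes and variances below are hypothetical, not from the article:

```python
# Illustrative fixed-effect meta-analysis: weight each coded effect size
# by the inverse of its sampling variance, then average.
def pooled_effect(effects, variances):
    """Return (inverse-variance-weighted mean effect, variance of that mean)."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    mean = sum(w * e for w, e in zip(weights, effects)) / total
    return mean, 1.0 / total

# Three hypothetical coded studies: (effect size, sampling variance)
effects = [0.30, 0.10, 0.25]
variances = [0.04, 0.01, 0.02]
mean, var = pooled_effect(effects, variances)
```

A random-effects variant would add a between-study variance component to each weight; the weighting logic is otherwise the same.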
Peer reviewed
Tremper, Charles; Thomas, Sue; Wagenaar, Alexander C. – Evaluation Review, 2010
Evaluations that combine social science and law have tremendous potential to illuminate the effects of governmental policies and yield insights into how effectively policy makers' efforts achieve their aims. This potential is infrequently achieved, however, because such interdisciplinary research contains often overlooked substantive and…
Descriptors: Evaluation Research, Interdisciplinary Approach, Social Sciences, Research Methodology
Peer reviewed
Kostoff, Ronald N.; And Others – Evaluation Review, 1994
Articles in this special issue deal with the assessment of the impact of research and are divided into segments concerning semiquantitative approaches; qualitative approaches; and quantitative and fiscal approaches. These articles illustrate the importance of the role of motivation and associated incentives. (SLD)
Descriptors: Cost Effectiveness, Economic Factors, Evaluation Methods, Evaluation Utilization
Peer reviewed
Heilman, John G. – Evaluation Review, 1980
The choice between experimental research and process-oriented research as the only valid paradigm of evaluation research is rejected. It is argued that there is a middle ground, and suggestions are made for mixing the two approaches to suit particular research settings. (Author/GK)
Descriptors: Evaluation Methods, Evaluative Thinking, Models, Program Evaluation
Peer reviewed
Kostoff, Ronald N. – Evaluation Review, 1994
Strengths and weaknesses of three types of semiquantitative methods used by the federal government in research impact assessment are presented, and examples of their use are reviewed. These include the classic retrospective method, another retrospective approach, and accomplishments books used by selected research-sponsoring organizations. (SLD)
Descriptors: Cost Effectiveness, Evaluation Methods, Evaluation Utilization, Federal Government
Peer reviewed
Nagel, Stuart S. – Evaluation Review, 1984
Introspective interviewing can often determine the magnitude of relations more meaningfully than statistical analysis. Deduction from empirically validated premises avoids many research design problems. Guesswork can be combined with sensitivity analysis to determine the effects of guesses and missing information on conclusions. (Author/DWH)
Descriptors: Deduction, Evaluation Methods, Intuition, Policy Formation
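The combination of guesswork and sensitivity analysis that Nagel describes can be sketched as varying an uncertain input over a plausible range and checking whether the conclusion flips. The benefit/cost model and all numbers here are hypothetical illustrations, not taken from the article:

```python
# Sensitivity-analysis sketch: a policy "pays off" when estimated benefits
# exceed cost. The effect size is a guess, so we sweep it over a range.
def net_benefit(effect_guess, cost=100.0, value_per_unit=40.0):
    return effect_guess * value_per_unit - cost

# Does the conclusion (pays off or not) depend on the guess?
conclusions = {g: net_benefit(g) > 0 for g in [1.0, 2.0, 3.0, 4.0]}
# If the sign is stable across the whole plausible range, the missing
# information does not change the conclusion; here it flips at 2.5 units.
```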
Peer reviewed
Alemi, Farrokh – Evaluation Review, 1987
Trade-offs are implicit in choosing a subjective or objective method for evaluating social programs. The differences between Bayesian and traditional statistics, decision and cost-benefit analysis, and anthropological and traditional case systems illustrate trade-offs in choosing methods because of limited resources. (SLD)
Descriptors: Bayesian Statistics, Case Studies, Evaluation Methods, Program Evaluation
Peer reviewed
Dennis, Michael L. – Evaluation Review, 1990
Six potential problems with the use of randomized experiments to evaluate programs in the field are addressed. Problems include treatment dilution, treatment contamination or confounding, inaccurate case flow and power estimates, violations of the random assignment processes, changes in the environmental context, and changes in the treatment…
Descriptors: Drug Rehabilitation, Evaluation Problems, Experiments, Field Studies
Peer reviewed
Hennessy, Michael; Saltz, Robert F. – Evaluation Review, 1989
A beverage-server intervention project at two West Coast Navy bases that attempted to reduce levels of alcoholic intoxication via policy changes and server training is described. Data obtained via interviews and structured observations of 1,511 club customers indicate methodological bias and self-selection effects. Bias adjustments were performed…
Descriptors: Alcohol Abuse, Clubs, Dining Facilities, Enlisted Personnel
Peer reviewed
Murray, David M.; And Others – Evaluation Review, 1994
This article presents a synopsis of each of seven presentations given at a conference on design and analysis in community trial studies. Papers identify problems with community trials and discuss strengths and weaknesses associated with design and analysis strategies. Areas of consensus are summarized. (SLD)
Descriptors: Cohort Analysis, Conferences, Evaluation Methods, Intervention
Peer reviewed
Aiken, Leona S.; West, Stephen G. – Evaluation Review, 1990
The validity of true experiments is threatened by a class of self-report biases that affect all respondents at pretest, but which are diminished by treatment. Four of these inaccurate self-evaluation biases are discussed. Means of detection include external criteria, special conditions of measurement, and retrospective pretests. (TJH)
Descriptors: Bias, Drug Rehabilitation, Evaluation Problems, Experiments
Peer reviewed
Trochim, William M.K. – Evaluation Review, 1982
Meta-analysis of Title I program evaluations shows that the norm-referenced model overestimates positive effectiveness, while the regression-discontinuity design underestimates it. Potential biases include residual regression artifacts, attrition and time-of-testing problems in the norm-referenced design, and assignment, measurement, and data…
Descriptors: Compensatory Education, Data Collection, Elementary Secondary Education, Evaluation Methods
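The regression-discontinuity design the Trochim abstract mentions estimates a treatment effect as the gap between regression lines at an assignment cutoff. A minimal sketch on simulated data; this is not the article's Title I analysis:

```python
# Regression-discontinuity sketch: units with score >= cutoff get the
# treatment. Fit a line on each side of the cutoff; the vertical gap
# between the two fitted lines at the cutoff estimates the effect.
def ols(xs, ys):
    """Simple least-squares fit; returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, slope

def rd_effect(scores, outcomes, cutoff):
    left = [(s, y) for s, y in zip(scores, outcomes) if s < cutoff]
    right = [(s, y) for s, y in zip(scores, outcomes) if s >= cutoff]
    a0, b0 = ols([s for s, _ in left], [y for _, y in left])
    a1, b1 = ols([s for s, _ in right], [y for _, y in right])
    # Predicted outcomes at the cutoff, treated minus untreated.
    return (a1 + b1 * cutoff) - (a0 + b0 * cutoff)
```

The biases the abstract lists (residual regression artifacts, attrition, mismeasured assignment) all act by distorting one of these two fitted lines near the cutoff.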
Peer reviewed
Ormala, Erkki – Evaluation Review, 1994
Trends in European practice that relate to qualitative assessment in the evaluation of the impact of research and innovation are discussed and analyzed. To date, European evaluations have been mainly concerned with quality and direct impact with few assessments of medium- or long-term impact. (SLD)
Descriptors: Data Analysis, Data Collection, Evaluation Methods, Foreign Countries
Peer reviewed
Moffitt, Robert – Evaluation Review, 1991
Statistical methods for program evaluation with nonexperimental data are reviewed with emphasis on circumstances in which nonexperimental data are valid. Three solutions are proposed for problems of selection bias, and implications for evaluation design and data collection and analysis are discussed. (SLD)
Descriptors: Bias, Cohort Analysis, Equations (Mathematics), Estimation (Mathematics)
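Moffitt's review concerns estimators for nonexperimental data. As one standard illustration of correcting selection bias when selection depends only on observables (not a method attributed to the article), inverse-probability weighting reweights each unit by the inverse of its treatment probability:

```python
# Inverse-probability-weighting sketch for selection on observables.
# `pscores` are the (here, assumed known) probabilities of selecting into
# treatment; in practice they would be estimated from covariates.
def ipw_effect(outcomes, treated, pscores):
    """Weighted treated mean minus weighted control mean."""
    t = sum(y / p for y, d, p in zip(outcomes, treated, pscores) if d)
    tw = sum(1 / p for d, p in zip(treated, pscores) if d)
    c = sum(y / (1 - p) for y, d, p in zip(outcomes, treated, pscores) if not d)
    cw = sum(1 / (1 - p) for d, p in zip(treated, pscores) if not d)
    return t / tw - c / cw
```

When selection depends on unobservables, this adjustment is insufficient, which is why reviews of this kind emphasize the circumstances under which nonexperimental estimates are valid.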
Peer reviewed
Averch, Harvey A. – Evaluation Review, 1994
This article reviews the principal methods economists and cost benefit analysts use in evaluating research. Two common approaches are surplus measures (combinations of consumer and producer surpluses) and productivity measures. Technical difficulties and political and organizational constraints are discussed for these measures. (SLD)
Descriptors: Consumer Economics, Cost Effectiveness, Economic Impact, Evaluation Methods