Showing 196 to 210 of 475 results
Peer reviewed
Kostoff, Ronald N. – Evaluation Review, 1994
The use of peer review for federal research impact evaluation is described. Advanced review processes can improve the efficiency of a review, but the most important factors in a quality review are leader motivation and the competence and independence of review team members. No single method provides complete impact evaluation. (SLD)
Descriptors: Competence, Cost Effectiveness, Evaluation Methods, Federal Government
Peer reviewed
Krull, Jennifer L.; MacKinnon, David P. – Evaluation Review, 1999
Proposes and evaluates a method to test for mediation in multilevel data sets formed when an intervention administered to groups is designed to produce change in individual mediator and outcome variables. Applies the method to the ATLAS intervention designed to decrease steroid use among high school football players. (SLD)
Descriptors: Athletes, Change, Drug Use, Evaluation Methods
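The test summarized in this abstract is, in general form, a product-of-coefficients mediation test estimated with multilevel models. A minimal sketch of that general approach follows, assuming hypothetical column names (treat, mediator, outcome, school) and a simple random-intercept specification rather than the authors' exact model:

import numpy as np
import statsmodels.formula.api as smf

def multilevel_mediation(df):
    # df is assumed to hold one row per student with columns:
    # treat (0/1), mediator, outcome, and school (the cluster identifier).
    # Path a: treatment -> mediator, with a random intercept for each cluster.
    fit_a = smf.mixedlm("mediator ~ treat", df, groups=df["school"]).fit()
    # Path b: mediator -> outcome, adjusting for treatment, same clustering.
    fit_b = smf.mixedlm("outcome ~ mediator + treat", df, groups=df["school"]).fit()
    a, b = fit_a.params["treat"], fit_b.params["mediator"]
    se_a, se_b = fit_a.bse["treat"], fit_b.bse["mediator"]
    indirect = a * b
    # First-order (Sobel-type) standard error of the indirect effect a*b.
    se_indirect = np.sqrt(a**2 * se_b**2 + b**2 * se_a**2)
    return indirect, se_indirect, indirect / se_indirect

The returned ratio can be compared to a normal reference distribution, although resampling-based intervals are often preferred for indirect effects.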
Peer reviewed
Direct link
Peck, Laura R. – Evaluation Review, 2005
The conventional way to measure program impacts is to compute the average treatment effect; that is, the difference between a treatment group that received some intervention and a control group that did not. Recently, scholars have recognized that looking only at the average treatment effect may obscure impacts that accrue to subgroups. In an…
Descriptors: Program Effectiveness, Evaluation Methods, Welfare Recipients, Multivariate Analysis
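The conventional estimate described here is simply the difference in mean outcomes between the treatment and control groups. A toy example with entirely made-up data shows how that overall average can sit near zero while sizable, offsetting subgroup effects go undetected, which is the concern the article raises:

import numpy as np
import pandas as pd

# Hypothetical data: a treatment indicator, a subgroup label, and an outcome.
rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "treat": rng.integers(0, 2, n),
    "subgroup": rng.choice(["A", "B"], n),
})
# Subgroup A benefits (+2), subgroup B is harmed (-2); the average effect is ~0.
df["outcome"] = np.where(df["subgroup"] == "A", 2.0, -2.0) * df["treat"] + rng.normal(0, 1, n)

ate = df.loc[df.treat == 1, "outcome"].mean() - df.loc[df.treat == 0, "outcome"].mean()
print(f"Overall average treatment effect: {ate:.2f}")  # close to zero

for name, grp in df.groupby("subgroup"):
    eff = grp.loc[grp.treat == 1, "outcome"].mean() - grp.loc[grp.treat == 0, "outcome"].mean()
    print(f"Subgroup {name} effect: {eff:.2f}")  # roughly +2 and -2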
Peer reviewed
Direct link
Blitstein, Jonathan L.; Hannan, Peter J.; Murray, David M.; Shadish, William R. – Evaluation Review, 2005
This study describes a method for incorporating external estimates of intraclass correlation to improve the precision for the analysis of an existing group-randomized trial. The authors use a random-effects meta-analytic approach to pool the information across studies, which takes into account any interstudy heterogeneity that may exist. This…
Descriptors: Freedom, Computation, Correlation, Evaluation Research
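The pooling step mentioned here is a random-effects meta-analysis of intraclass correlation (ICC) estimates drawn from other studies. As a sketch, the function below applies the common DerSimonian-Laird estimator to made-up ICC values; it illustrates the general technique, not necessarily the authors' exact procedure:

import numpy as np

def dersimonian_laird_pool(estimates, variances):
    # Random-effects pooling of study-level estimates (DerSimonian-Laird).
    est = np.asarray(estimates, dtype=float)
    var = np.asarray(variances, dtype=float)
    w = 1.0 / var                                   # fixed-effect weights
    fixed = np.sum(w * est) / np.sum(w)
    q = np.sum(w * (est - fixed) ** 2)              # Cochran's Q heterogeneity statistic
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(est) - 1)) / c)       # between-study variance
    w_re = 1.0 / (var + tau2)                       # random-effects weights
    pooled = np.sum(w_re * est) / np.sum(w_re)
    pooled_se = np.sqrt(1.0 / np.sum(w_re))
    return pooled, pooled_se, tau2

# Hypothetical external ICC estimates and their sampling variances.
print(dersimonian_laird_pool([0.010, 0.015, 0.022], [4e-6, 6e-6, 9e-6]))

The pooled ICC and its uncertainty can then inform the standard errors used in the group-randomized analysis, rather than relying on the single within-trial estimate alone.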
Peer reviewed
Hennessy, Michael; Saltz, Robert F. – Evaluation Review, 1989
A beverage-server intervention project at two West Coast Navy bases that attempted to reduce levels of alcoholic intoxication via policy changes and server training is described. Data obtained via interviews and structured observations of 1,511 club customers indicate methodological bias and self-selection effects. Bias adjustments were performed…
Descriptors: Alcohol Abuse, Clubs, Dining Facilities, Enlisted Personnel
Peer reviewed
Johnson, Eleanor Liebman – Evaluation Review, 1985
In 1980, five federal special education longitudinal qualitative impact evaluations were all prematurely terminated. What happened, why it happened, and what salvage decisions were made are examined in this article. The impact of these decisions on the design of RFPs and future evaluation plans is also discussed. (Author/EGS)
Descriptors: Elementary Secondary Education, Evaluation Needs, Federal Legislation, Federal Programs
Peer reviewed
Wortman, Paul M.; Marans, Robert W. – Evaluation Review, 1987
The concept of "preevaluative research" is examined in the context of a museum exhibition evaluation. It is viewed as distinct from an evaluability assessment. The exhibit preevaluative study indicates that instrumentation and implementation issues are likely to benefit from such activities, but that design and analysis may suffer.…
Descriptors: Arts Centers, High Schools, Interviews, Program Evaluation
Peer reviewed
Dickinson, Katherine P.; And Others – Evaluation Review, 1987
Net impact estimates of Comprehensive Employment and Training Act (CETA) programs vary widely and can be explained by the different evaluation methodologies used. Estimates are sensitive to the inclusion of recently unemployed persons in the comparison sample and assumptions about the time of decision to enroll in CETA. (GDC)
Descriptors: Adult Education, Effect Size, Employment Programs, Evaluation Methods
Peer reviewed
Trochim, William M. K.; Davis, James E. – Evaluation Review, 1986
Microcomputer simulations in evaluation research are useful for (1) improving student understanding of research principles and analytic techniques; (2) investigating problems arising in research implementations; and (3) exploring the accuracy and utility of novel analytic techniques. This article describes these simulation uses for the context of…
Descriptors: Computer Assisted Instruction, Computer Simulation, Computer Software, Evaluation Methods
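As a small, hypothetical illustration of the first use listed above, the simulation below repeatedly draws data from a two-group randomized experiment with a known true effect so that students can see sampling variability in the estimated effect; the design and parameters are invented here, not taken from the article:

import numpy as np

# Monte Carlo simulation of a two-group randomized experiment: repeatedly
# sample data with a known true effect and examine the spread of estimates.
rng = np.random.default_rng(42)
true_effect, n_per_group, n_reps = 0.5, 50, 2000

estimates = []
for _ in range(n_reps):
    control = rng.normal(0.0, 1.0, n_per_group)
    treated = rng.normal(true_effect, 1.0, n_per_group)
    estimates.append(treated.mean() - control.mean())

estimates = np.array(estimates)
print(f"Mean estimate: {estimates.mean():.3f} (true effect = {true_effect})")
print(f"Standard deviation of estimates: {estimates.std(ddof=1):.3f}")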
Peer reviewed
Mark, Melvin M.; Shotland, R. Lance – Evaluation Review, 1985
Value judgments are central to the process of stakeholder-based evaluations. The selection of stakeholder participants involves a value judgment about the power and the legitimacy of the stakeholders. Consequences of stakeholder evaluation may include pseudoempowerment. Suggestions for evaluators for improving stakeholder evaluations are made.…
Descriptors: Evaluation Methods, Evaluation Needs, Evaluators, Information Needs
Peer reviewed
Locke, Thomas P.; And Others – Evaluation Review, 1986
A controlled study of the impact of a juvenile education program on the recidivism rates of juveniles was performed. The program involved introducing the juveniles to prison life. Findings showed that youths categorized as more delinquent were affected differently by program attendance compared to youths categorized as less delinquent. (Author/LMO)
Descriptors: Adolescents, Correctional Institutions, Delinquency, Delinquent Rehabilitation
Peer reviewed
Connor, Ross F.; And Others – Evaluation Review, 1985
This article focuses on the distinction between needs assessment and demand assessment and presents a methodology for operationalizing and measuring demands. Results are reported of a survey of a national sample of 32 university and college administrators to assess their need and demand for an adult student opinion package. (Author/LMO)
Descriptors: Administrators, Adult Students, Evaluation Needs, Evaluation Utilization
Peer reviewed
Murray, David M.; And Others – Evaluation Review, 1994
This article presents a synopsis of each of seven presentations given at a conference on design and analysis in community trial studies. Papers identify problems with community trials and discuss strengths and weaknesses associated with design and analysis strategies. Areas of consensus are summarized. (SLD)
Descriptors: Cohort Analysis, Conferences, Evaluation Methods, Intervention
Peer reviewed
Aiken, Leona S.; West, Stephen G. – Evaluation Review, 1990
The validity of true experiments is threatened by a class of self-report biases that affect all respondents at pretest but are diminished by treatment. Four of these inaccurate self-evaluation biases are discussed. Means of detection include external criteria, special conditions of measurement, and retrospective pretests. (TJH)
Descriptors: Bias, Drug Rehabilitation, Evaluation Problems, Experiments
Peer reviewed
Feldman, Henry A.; And Others – Evaluation Review, 1996
A method is described for increasing residual degrees of freedom in a community experiment without substantially increasing cost or difficulty by dividing experimental subunits into batches. Theoretical advantages of batch sampling are described and illustrated with data from the Pawtucket Heart Health Program. (SLD)
Descriptors: Community Health Services, Costs, Difficulty Level, Evaluation Methods