Showing 1 to 15 of 22 results
Peer reviewed
Rhodes, William – Evaluation Review, 2012
Research synthesis of evaluation findings is a multistep process. An investigator identifies a research question, acquires the relevant literature, codes findings from that literature, and analyzes the coded data to estimate the average treatment effect and its distribution in a population of interest. The process of estimating the average…
Descriptors: Social Sciences, Regression (Statistics), Meta Analysis, Models
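The synthesis step Rhodes describes, pooling coded findings into an average treatment effect, is most often done by inverse-variance weighting. Below is a minimal fixed-effect sketch; the effect sizes and standard errors are hypothetical and do not come from the article:

```python
import numpy as np

# Hypothetical per-study effect sizes and their standard errors.
effects = np.array([0.30, 0.12, 0.45, 0.21])
ses = np.array([0.10, 0.08, 0.15, 0.12])

# Fixed-effect meta-analysis: weight each study by its inverse variance,
# so more precise studies contribute more to the pooled estimate.
weights = 1.0 / ses**2
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

print(f"pooled effect = {pooled:.3f} (SE = {pooled_se:.3f})")
```

A random-effects analysis, which Rhodes's framing of a "distribution in a population" points toward, would add a between-study variance component to the weights.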
Peer reviewed
Rhodes, William – Evaluation Review, 2010
Regressions that control for confounding factors are the workhorse of evaluation research. When treatment effects are heterogeneous, however, the workhorse regression leads to estimated treatment effects that lack behavioral interpretations even when the selection on observables assumption holds. Regressions that use propensity scores as weights…
Descriptors: Evaluation Research, Computation, Evaluators, Regression (Statistics)
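As a concrete illustration of the weighting approach Rhodes contrasts with the workhorse regression, here is a minimal inverse-probability-weighting sketch on simulated data with a heterogeneous treatment effect. The data-generating process and all numbers are invented for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulated data: selection into treatment depends on an observed x,
# and the treatment effect itself varies with x (heterogeneity).
n = 5000
x = rng.normal(size=(n, 1))
p_treat = 1 / (1 + np.exp(-x[:, 0]))
t = rng.binomial(1, p_treat)
y = 1.0 + 0.5 * x[:, 0] + t * (1.0 + 0.5 * x[:, 0]) + rng.normal(size=n)

# Estimate propensity scores, then form inverse-probability weights.
e = LogisticRegression().fit(x, t).predict_proba(x)[:, 1]
ate = np.mean(t * y / e) - np.mean((1 - t) * y / (1 - e))
print(f"IPW estimate of the average treatment effect: {ate:.2f}")  # truth: 1.0
```

Under selection on observables, the weighted contrast recovers the population average treatment effect, which is the behavioral interpretation the unweighted regression lacks when effects are heterogeneous.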
Peer reviewed
Merrall, Elizabeth L. C.; Dhami, Mandeep K.; Bird, Sheila M. – Evaluation Review, 2010
The determinants of sentencing are of much interest in criminal justice and legal research. Understanding the determinants of sentencing decisions is important for ensuring transparent, consistent, and justifiable sentencing practice that adheres to the goals of sentencing, such as the punishment, rehabilitation, deterrence, and incapacitation of…
Descriptors: Research Design, Research Methodology, Court Litigation, Social Justice
Peer reviewed
Hahs-Vaughn, Debbie L.; McWayne, Christine M.; Bulotsky-Shearer, Rebecca J.; Wen, Xiaoli; Faria, Ann-Marie – Evaluation Review, 2011
Complex survey data are collected by means other than simple random samples. This creates two analytical issues: nonindependence and unequal selection probability. Failing to address these issues results in underestimated standard errors and biased parameter estimates. Using data from the nationally representative Head Start Family and Child…
Descriptors: Research Methodology, Disadvantaged Youth, Probability, Early Intervention
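One quick way to see the unequal-selection-probability problem described above is Kish's design-effect approximation: unequal weights shrink the effective sample size, so unweighted standard errors are too small. A sketch with made-up weights (the survey data referenced above are not used here):

```python
import numpy as np

# Hypothetical sampling weights (inverse selection probabilities).
w = np.array([1.0, 1.0, 2.5, 2.5, 4.0, 4.0, 0.5, 0.5])

n = len(w)
n_eff = w.sum()**2 / (w**2).sum()  # Kish's effective sample size
deff = n / n_eff                   # design effect from unequal weighting alone

print(f"n = {n}, effective n = {n_eff:.1f}, design effect = {deff:.2f}")
# Nonindependence (clustering) inflates variance further; design-based
# survey routines handle both issues jointly.
```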
Peer reviewed
Wolbring, Tobias – Evaluation Review, 2012
Background: Many university departments use students' evaluations of teaching (SET) to compare and rank courses. However, absenteeism from class is often nonrandom and, therefore, SET for different courses might not be comparable. Objective: The present study aims to answer two questions. Are SET positively biased due to absenteeism? Do…
Descriptors: Research Design, Teacher Effectiveness, Student Evaluation of Teacher Performance, Attendance
Peer reviewed
Heilman, John G. – Evaluation Review, 1980
The choice between experimental research and process-oriented research as the only valid paradigm of evaluation research is rejected. It is argued that there is a middle ground, and suggestions are made for mixing the two approaches to suit particular research settings. (Author/GK)
Descriptors: Evaluation Methods, Evaluative Thinking, Models, Program Evaluation
Peer reviewed
Paccagnella, Omar – Evaluation Review, 2006
In multilevel regression, centering the model variables produces effects that are different and sometimes unexpected compared with those in traditional regression analysis. In this article, the main contributions in terms of meaning, assumptions, and effects underlying a multilevel centering solution are reviewed, emphasizing advantages and…
Descriptors: Regression (Statistics), Educational Research, Models, Correlation
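For readers unfamiliar with the centering choices Paccagnella reviews, the sketch below shows the two standard options, group-mean and grand-mean centering, on a toy two-level dataset. The column names and values are hypothetical:

```python
import pandas as pd

# Hypothetical two-level data: students (rows) nested in schools.
df = pd.DataFrame({
    "school": ["A", "A", "A", "B", "B", "B"],
    "ses":    [0.2, 0.5, 0.8, 1.4, 1.7, 2.0],
})

# Group-mean (within-cluster) centering splits a level-1 predictor into a
# between-school component (the school mean) and a within-school component,
# which is what gives multilevel coefficients distinct interpretations.
df["ses_school_mean"] = df.groupby("school")["ses"].transform("mean")
df["ses_within"] = df["ses"] - df["ses_school_mean"]

# Grand-mean centering, by contrast, only shifts the intercept; it leaves
# the within/between mix of the slope unchanged.
df["ses_grand"] = df["ses"] - df["ses"].mean()
print(df)
```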
Peer reviewed
Roos, Leslie L., Jr.; Nicol, J. Patrick – Evaluation Review, 1981
Criteria for suitable research designs for use with large databases are suggested and analyzed. The advantages and disadvantages of several types of quasi-experimental designs are compared. Examples are taken from the authors' research with data from the Manitoba Health Services Commission. (Author/AL)
Descriptors: Comparative Analysis, Control Groups, Databases, Experimental Groups
Peer reviewed
Lanza, Marilyn Lewis; Carifio, James – Evaluation Review, 1992
The validity of vignettes used to elicit subject responses in social science research is examined through a study of patient assault vignettes reviewed by 12 persons with experience in patient care. A model for establishing the validity of vignettes is proposed. (SLD)
Descriptors: Construct Validity, Data Collection, Evaluators, Models
Peer reviewed
Marconi, Katherine M.; Rudzinski, Karen A. – Evaluation Review, 1995
A formative evaluation model is proposed for use by administrators of large health services research grant programs. The model assists in assessing the purpose, methodology, and level of analysis of funded research. It is illustrated through a discussion of HIV/AIDS care. (SLD)
Descriptors: Acquired Immune Deficiency Syndrome, Administrators, Evaluation Methods, Formative Evaluation
Peer reviewed
Haveman, Robert H. – Evaluation Review, 1986
This article describes the method and the development of microdata simulation modeling over the past two decades. After tracing a brief history of this evaluation method, its problems and prospects are assessed. The effects of this research method on the development of the social sciences are examined. (JAZ)
Descriptors: Computer Science, Computer Simulation, Economic Research, Government (Administrative Body)
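Haveman's subject, microdata simulation, amounts to applying a policy rule to individual records and aggregating the results. A toy sketch, with an invented transfer rule and invented records:

```python
import pandas as pd

# Hypothetical microdata; a real model would use survey or administrative records.
people = pd.DataFrame({
    "income":   [8_000, 18_000, 35_000, 60_000],
    "children": [2, 1, 0, 3],
})

def benefit(row, threshold=20_000, per_child=1_500):
    """Toy transfer rule: a per-child benefit, cut off above an income threshold."""
    if row["income"] >= threshold:
        return 0.0
    return row["children"] * per_child

# Apply the policy rule record by record, then aggregate to program-level outputs.
people["benefit"] = people.apply(benefit, axis=1)
print(f"total program cost: {people['benefit'].sum():,.0f}")
print(f"share receiving benefit: {(people['benefit'] > 0).mean():.0%}")
```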
Peer reviewed
Alexander, H. A. – Evaluation Review, 1986
Defenses of qualitative evaluation methods based on a hard-relativist interpretation of the work of Thomas Kuhn are examined. A promising defense of qualitative evaluation may instead be found in a soft-relativist interpretation of Kuhn's analysis of the nature of scientific discovery. (Author/LMO)
Descriptors: Educational Assessment, Epistemology, Evaluation Criteria, Evaluation Methods
Peer reviewed
DeYoung, David J.; Conner, Ross F. – Evaluation Review, 1982
Evaluators usually have preconceptions about how decisions are made in social programs and how evaluation results will be used. This article demonstrates how an evaluator's choice of a decision-making model has significant impact on the conduct and fate of the research. (Author/CM)
Descriptors: Concept Formation, Decision Making, Evaluative Thinking, Evaluators
Peer reviewed
Chen, Huey-Tsyh; Rossi, Peter H. – Evaluation Review, 1983
The use of theoretical models in impact assessment can heighten the power of experimental designs and compensate for some deficiencies of quasi-experimental designs. Theoretical models of implementation processes are examined; the authors argue that these processes are a major obstacle to fully effective programs. (Author/CM)
Descriptors: Evaluation Criteria, Evaluation Methods, Models, Program Evaluation
Peer reviewed
Pituch, Keenan A. – Evaluation Review, 1999
Describes procedures that can be used to summarize the effect of schools when interactions between school practice and student background exist. Applies these procedures to a fairly realistic school effects dataset. Highlights the importance of considering differential school effectiveness rather than using a single quantitative indicator. (SLD)
Descriptors: Educational Practices, Elementary Secondary Education, Interaction, Models
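Pituch's point that a single quantitative indicator can mislead when school practice interacts with student background can be shown in a few lines. In the invented example below, each school has its own intercept and its own slope on a background measure, and the school ranking flips across backgrounds:

```python
# Invented school-by-background interaction: each school has its own
# intercept and slope on a student background measure (e.g., SES).
intercepts = {"School 1": 50.0, "School 2": 55.0}
slopes = {"School 1": 8.0, "School 2": 2.0}

for ses in (-1.0, 0.0, 1.0):  # low, average, high student background
    scores = {s: intercepts[s] + slopes[s] * ses for s in intercepts}
    better = max(scores, key=scores.get)
    print(f"SES={ses:+.0f}: {scores} -> higher predicted score: {better}")
```

With these numbers, School 2 looks more effective for low- and average-background students while School 1 overtakes it at high backgrounds, so no single school effect summarizes both.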