Showing 1 to 15 of 18 results
Jiyeo Yun – English Teaching, 2023
Studies on automatic scoring systems in writing assessments have also evaluated the relationship between human and machine scores for the reliability of automated essay scoring systems. This study investigated the magnitudes of indices for inter-rater agreement and discrepancy, especially regarding human and machine scoring, in writing assessment.…
Descriptors: Meta Analysis, Interrater Reliability, Essays, Scoring
Peer reviewed
Direct link
Donaldson, Morgaen L.; Firestone, William – Journal of Educational Change, 2021
Teacher evaluation's relationship with instructional improvement is under-theorized in the literature. To address this gap, this paper uses a conceptual framework rooted in human, social, and material capital to analyze and synthesize findings from research conducted since 2009 on whether and under what conditions teacher evaluation stimulates…
Descriptors: Human Capital, Social Capital, Cultural Capital, Educational Change
Peer reviewed
Direct link
Saito, Kazuya; Plonsky, Luke – Language Learning, 2019
We propose a new framework for conceptualizing measures of instructed second language (L2) pronunciation performance according to three sets of parameters: (a) the constructs (focused on global vs. specific aspects of pronunciation), (b) the scoring method (human raters vs. acoustic analyses), and (c) the type of knowledge elicited (controlled vs.…
Descriptors: Second Language Learning, Second Language Instruction, Scoring, Pronunciation Instruction
Peer reviewed
Direct link
Mihura, Joni L.; Meyer, Gregory J.; Dumitrascu, Nicolae; Bombel, George – Psychological Bulletin, 2013
We systematically evaluated the peer-reviewed Rorschach validity literature for the 65 main variables in the popular Comprehensive System (CS). Across 53 meta-analyses examining variables against externally assessed criteria (e.g., observer ratings, psychiatric diagnosis), the mean validity was r = 0.27 (k = 770) as compared to r = 0.08 (k = 386)…
Descriptors: Validity, Criteria, Measurement Techniques, Peer Evaluation
Peer reviewed
Direct link
Sims, Wendy, Ed. – Journal of Research in Music Education, 2009
This article presents a study conducted by Cornelia Yarbrough and Jennifer Whitaker titled "Analysis of Reviewer Comments About Quantitative Manuscripts Accepted by the 'Journal of Research in Music Education'." The study aims to analyze reviewers' comments for quantitative manuscripts with regard to the following categories: section discussed…
Descriptors: Music Education, Interrater Reliability, Evaluators, Meta Analysis
Peer reviewed
Direct link
Compton, Donald W. – New Directions for Evaluation, 2009
On the basis of a multistage exploration of evaluation texts, electronic searches, and nominations from the field and from the social science management literature, the author concludes that there is little research literature on managing evaluation studies, evaluators and other workers, and evaluation units. The discussion explores what this limited literature tells…
Descriptors: Evaluators, Literature Reviews, Research Administration, Research Directors
Peer reviewed
Direct link
Holbrook, Allyson; Bourke, Sid; Lovat, Terry; Fairbairn, Hedy – Australian Journal of Education, 2008
This is a mixed methods investigation of consistency in PhD examination. At its core is the quantification of the content and conceptual analysis of examiner reports for 804 Australian theses. First, the level of consistency between what examiners say in their reports and the recommendation they provide for a thesis is explored, followed by an…
Descriptors: Academic Standards, Examiners, Student Evaluation, Foreign Countries
Peer reviewed
Direct link
Honekopp, Johannes; Becker, Betsy Jane; Oswald, Frederick L. – Psychological Methods, 2006
Four types of analysis are commonly applied to data from structured Rater × Ratee designs. These types are characterized by the unit of analysis, which is either raters or ratees, and by the design used, which is either a between-units or within-unit design. The four types of analysis are quite different, and therefore they give rise to effect…
Descriptors: Meta Analysis, Effect Size, Data Analysis, Evaluators
Peer reviewed
Direct link
Saito, Hidetoshi – Language Testing, 2008
This study examined the effects of training on peer assessment and comments provided regarding oral presentations in EFL (English as a Foreign Language) classrooms. In Study 1, both the treatment and control groups received instruction on skill aspects, but only the treatment group was given an additional 40-minute training on how to rate…
Descriptors: Control Groups, Student Attitudes, Peer Evaluation, English (Second Language)
Peer reviewed
Heilman, John G. – Evaluation Review, 1983
The article suggests supplementing the social-problem study group and data-synthesis approaches to knowledge building with an emphasis on synthesizing reviews. Such reviews would strengthen the evaluation profession and promote utilization of the knowledge base generated by evaluators. (DWH)
Descriptors: Evaluation, Evaluation Utilization, Evaluators, Literature Reviews
Ford, J. Kevin; And Others – 1985
From a cognitive perspective, racial bias is evident when raters weigh job-relevant information differentially as a function of ratee race. The results of studies that have examined this issue have been conflicting. Meta-analytic procedures were used to provide more definitive conclusions as to whether supervisor ratings are more strongly related…
Descriptors: Evaluation Criteria, Evaluators, Job Performance, Knowledge Level
Orwin, Robert G. – New Directions for Testing and Measurement, 1985
The manner in which results and methods are reported influences the usefulness of syntheses of prior studies for planning new evaluations. Confidence ratings, coding conventions, and supplemental evidence can partially overcome these difficulties. Planners must acknowledge the influence of their own judgment in using prior research. (Author)
Descriptors: Decision Making, Evaluation Methods, Evaluators, Meta Analysis
Peer reviewed
Direct link
Oppenheimer, Todd – Education Next, 2007
Educational software makers are often rebuffed by educational authorities, whose endorsements could lead to governmental stamps of approval, and thus explosive sales. But they usually get warmer receptions in the offices of the nation's school superintendents, who are, after all, their primary customers. The system was not supposed to work this…
Descriptors: Federal Legislation, Vendors, Instructional Materials, Computer Software
Senechal, Monique – National Institute for Literacy, 2006
Goal: Educators believe that parents can help their children learn to read. But what evidence supports this belief? And if parent involvement does matter, what kinds of parent involvement are most effective? The goal of this report was to review the scientific literature on parent involvement in the acquisition of reading from kindergarten to…
Descriptors: Scientific Research, Reading Instruction, Evaluators, Family Literacy
Peer reviewed
Direct link
Patton, Michael Quinn – New Directions for Evaluation, 2005
The purpose of this concluding chapter has been to stimulate creative thinking about how to use cases for evaluation teaching and training. The preparation of professional evaluators presents special challenges. Imaginative new teaching and training resources are appearing, as evidenced by the special sections on evaluation teaching that now…
Descriptors: Case Method (Teaching Technique), Ethics, Creative Teaching, Evaluation Methods