Showing all 8 results
Peer reviewed
Ramachandran, Lakshmi; Gehringer, Edward F.; Yadav, Ravi K. – International Journal of Artificial Intelligence in Education, 2017
A "review" is textual feedback provided by a reviewer to the author of a submitted version. Peer reviews are used in academic publishing and in education to assess student work. While reviews are important to e-commerce sites like Amazon and eBay, which use them to assess the quality of products and services, our work focuses on…
Descriptors: Natural Language Processing, Peer Evaluation, Educational Quality, Meta Analysis
Peer reviewed
Mihura, Joni L.; Meyer, Gregory J.; Dumitrascu, Nicolae; Bombel, George – Psychological Bulletin, 2013
We systematically evaluated the peer-reviewed Rorschach validity literature for the 65 main variables in the popular Comprehensive System (CS). Across 53 meta-analyses examining variables against externally assessed criteria (e.g., observer ratings, psychiatric diagnosis), the mean validity was r = 0.27 (k = 770) as compared to r = 0.08 (k = 386)…
Descriptors: Validity, Criteria, Measurement Techniques, Peer Evaluation
Peer reviewed
Gegenfurtner, Andreas – Educational Research Review, 2011
This meta-analysis (148 studies, k = 197, N = 31,718) examined the relationship between motivation and transfer in professional training. For this purpose, motivation was conceptualized in the following nine dimensions: motivation to learn, motivation to transfer, pre- and post-training self-efficacy, mastery orientation, performance orientation,…
Descriptors: Self Efficacy, Professional Training, Student Motivation, Learning Motivation
Graham, Steve; Harris, Karen; Hebert, Michael – Carnegie Corporation of New York, 2011
During this decade there have been numerous efforts to identify instructional practices that improve students' writing. These include "Reading Next" (Biancarosa and Snow, 2004), which provided a set of instructional recommendations for improving writing, and "Writing Next" (Graham and Perin, 2007) and "Writing to Read" (Graham and Hebert, 2010),…
Descriptors: Writing Evaluation, Formative Evaluation, Writing Improvement, Writing Instruction
Peer reviewed
Saito, Hidetoshi – Language Testing, 2008
This study examined the effects of training on peer assessment and comments provided regarding oral presentations in EFL (English as a Foreign Language) classrooms. In Study 1, both the treatment and control groups received instruction on skill aspects, but only the treatment group was given an additional 40-minute training on how to rate…
Descriptors: Control Groups, Student Attitudes, Peer Evaluation, English (Second Language)
Peer reviewed
Falchikov, Nancy; Goldfinch, Judy – Review of Educational Research, 2000
Subjected 48 quantitative peer assessment studies that compared peer and teacher marks to meta-analysis. Peer assessments were found to resemble teacher assessments more closely when global judgments were based on well-understood criteria than when marking involved assessing several individual dimensions. (Author/SLD)
Descriptors: College Faculty, College Students, Comparative Analysis, Criteria
Peer reviewed
Harris, Michael M.; Schaubroeck, John – Personnel Psychology, 1988
Used meta-analysis to review the literature regarding the correlation between self-supervisor, self-peer, and peer-supervisor ratings. Results indicated a relatively high correlation between peer and supervisor ratings, but only a moderate correlation between self-supervisor and self-peer ratings, with job type seeming to moderate self-peer and…
Descriptors: Employee Attitudes, Employees, Employer Attitudes, Evaluation Methods
Peer reviewed
Goldman, Ronald L. – Evaluation and the Health Professions, 1994
A meta-analysis of studies examining the interrater reliability of the standard practice of peer assessments of quality of care was conducted through the use of several databases. The mean weighted kappa of 21 findings from 13 studies was 0.31, which suggests that the interrater reliability of peer assessment is limited. (SLD)
Descriptors: Databases, Evaluation Methods, Health Services, Interrater Reliability