Showing 1 to 15 of 24 results
Peer reviewed
Beseiso, Majdi; Alzubi, Omar A.; Rashaideh, Hasan – Journal of Computing in Higher Education, 2021
E-learning is gradually gaining prominence in higher education, with universities expanding provision and more students enrolling. Automated essay scoring (AES) therefore holds strong appeal for universities as a way to manage growing demand for learning while reducing the costs associated with human raters. The growth in…
Descriptors: Automation, Scoring, Essays, Writing Tests
Peer reviewed
Doewes, Afrizal; Kurdhi, Nughthoh Arfawi; Saxena, Akrati – International Educational Data Mining Society, 2023
Automated Essay Scoring (AES) tools aim to improve the efficiency and consistency of essay scoring by using machine learning algorithms. Most existing research on this topic agrees that human-automated score agreement remains the benchmark for assessing the accuracy of machine-generated scores. To measure the performance of…
Descriptors: Essays, Writing Evaluation, Evaluators, Accuracy
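For readers unfamiliar with the benchmark mentioned above: human-automated score agreement in AES studies is most commonly reported as quadratic weighted kappa (QWK). A minimal Python sketch, where the function and the 0-4 example scale are illustrative assumptions rather than details from the paper:

import numpy as np

def quadratic_weighted_kappa(human, machine, num_ratings):
    # Agreement between two raters on an ordinal scale 0..num_ratings-1.
    human, machine = np.asarray(human), np.asarray(machine)
    # Observed confusion matrix of human vs. machine scores.
    observed = np.zeros((num_ratings, num_ratings))
    for h, m in zip(human, machine):
        observed[h, m] += 1
    # Expected matrix if the two raters were statistically independent.
    expected = np.outer(np.bincount(human, minlength=num_ratings),
                        np.bincount(machine, minlength=num_ratings)) / len(human)
    # Quadratic penalty: disagreements weighted by squared score distance.
    idx = np.arange(num_ratings)
    weights = (idx[:, None] - idx[None, :]) ** 2 / (num_ratings - 1) ** 2
    return 1.0 - (weights * observed).sum() / (weights * expected).sum()

# Illustrative scores on a 0-4 scale.
human_scores   = [2, 3, 4, 1, 2, 3]
machine_scores = [2, 3, 3, 1, 2, 4]
print(quadratic_weighted_kappa(human_scores, machine_scores, num_ratings=5))

A QWK of 1 indicates perfect agreement and 0 indicates chance-level agreement; the quadratic weighting penalizes a machine score two points away from the human score four times as heavily as a one-point discrepancy.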
Peer reviewed
Humphry, Stephen Mark; Heldsinger, Sandy – Journal of Educational Measurement, 2019
To capitalize on professional expertise in educational assessment, it is desirable to develop and test methods of rater-mediated assessment that enable classroom teachers to make reliable and informative judgments. Accordingly, this article investigates the reliability of a two-stage method used by classroom teachers to assess primary school…
Descriptors: Essays, Elementary School Students, Writing (Composition), Writing Evaluation
Peer reviewed
Sanders, Joe Sutliff – Children's Literature in Education, 2015
A recent surge of conversation about children's nonfiction reveals a conflict between two positions that do not at first appear to be opposed: modeling inquiry and presenting authoritative facts. Tanya Lee Stone, the author of the Sibert Award-winning "Almost Astronauts" (2009), has recently alluded to that tension and expressed a…
Descriptors: Childrens Literature, Nonfiction, Authors, Inquiry
Peer reviewed
Nehm, Ross H.; Haertig, Hendrik – Journal of Science Education and Technology, 2012
Our study examines the efficacy of Computer Assisted Scoring (CAS) of open-response text relative to expert human scoring within the complex domain of evolutionary biology. Specifically, we explored whether CAS can diagnose the explanatory elements (or Key Concepts) that comprise undergraduate students' explanatory models of natural selection with…
Descriptors: Evolution, Undergraduate Students, Interrater Reliability, Computers
Peer reviewed
Brown, Gavin T. L. – Higher Education Quarterly, 2010
The use of timed essay examinations is a well-established means of evaluating student learning in higher education. The reliability of essay scoring, however, is problematic, and essay examination grades appear to depend heavily on the language and organisational components of writing. Computer-assisted scoring of essays makes use of language…
Descriptors: Higher Education, Essay Tests, Validity, Scoring
Peer reviewed
Lee, Yong-Won; Gentile, Claudia; Kantor, Robert – Applied Linguistics, 2010
The main purpose of the study was to investigate the distinctness and reliability of analytic (or multi-trait) rating dimensions and their relationships to holistic scores and "e-rater"[R] essay feature variables in the context of the TOEFL[R] computer-based test (TOEFL CBT) writing assessment. Data analyzed in the study were holistic…
Descriptors: Writing Evaluation, Writing Tests, Scoring, Essays
Peer reviewed
Coniam, David – Educational Research and Evaluation, 2009
This paper describes a study comparing paper-based marking (PBM) and onscreen marking (OSM) in Hong Kong utilising English language essay scripts drawn from the live 2007 Hong Kong Certificate of Education Examination (HKCEE) Year 11 English Language Writing Paper. In the study, 30 raters from the 2007 HKCEE Writing Paper marked on paper 100…
Descriptors: Student Attitudes, Foreign Countries, Essays, Comparative Analysis
Peer reviewed
Grimes, Douglas; Warschauer, Mark – Journal of Technology, Learning, and Assessment, 2010
Automated writing evaluation (AWE) software uses artificial intelligence (AI) to score student essays and support revision. We studied how an AWE program called MY Access![R] was used in eight middle schools in Southern California over a three-year period. Although many teachers and students considered automated scoring unreliable, and teachers'…
Descriptors: Automation, Writing Evaluation, Essays, Artificial Intelligence
Peer reviewed
Shaw, Stuart – E-Learning, 2008
Computer-assisted assessment offers many benefits over traditional paper methods. However, in transferring from one medium to another, it is crucial to ascertain the extent to which the new medium may alter the nature of traditional assessment practice or affect marking reliability. Whilst there is a substantial body of research comparing marking…
Descriptors: Construct Validity, Writing Instruction, Computer Assisted Testing, Student Evaluation
Peer reviewed
Schaefer, Edward – Language Testing, 2008
The present study employed multi-faceted Rasch measurement (MFRM) to explore the rater bias patterns of native English-speaker (NES) raters when they rate EFL essays. Forty NES raters rated 40 essays written by female Japanese university students on a single topic adapted from the TOEFL Test of Written English (TWE). The essays were assessed using…
Descriptors: Writing Evaluation, Writing Tests, Program Effectiveness, Essays
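For context, the multi-faceted Rasch measurement referenced here models each rating as a sum of separate facets; a common formulation (the study's exact parameterization may differ) is

\log \frac{P_{nijk}}{P_{nij(k-1)}} = B_n - D_i - C_j - F_k

where P_{nijk} is the probability that essay n receives category k rather than k-1 from rater j on criterion i, B_n is writer proficiency, D_i is criterion difficulty, C_j is rater severity, and F_k is the threshold for category k. Rater bias of the kind the study explores appears as systematic interaction effects (e.g., a rater-by-criterion term) beyond this baseline.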
Peer reviewed
Barkaoui, Khaled – Assessing Writing, 2007
Educators often have to choose among different types of rating scales to assess second-language (L2) writing performance. There is little research, however, on how different rating scales affect rater performance. This study employed a mixed-method approach to investigate the effects of two different rating scales on EFL essay scores, rating…
Descriptors: Writing Evaluation, Writing Tests, Rating Scales, Essays
Peer reviewed
Miller, Jeff – College Teaching, 1999
A college faculty member who has graded Advanced Placement exam essays on U.S. government and politics, taken mostly by high school juniors and seniors, argues that the high school teachers and college faculty who assess these essays are not the best qualified to do so, and that despite efforts to ensure consistency, the resulting scores are…
Descriptors: Advanced Placement, College Instruction, Essays, Evaluation Criteria
Henning, Grant – 1992
The psychometric characteristics of the Test of Written English (TWE) rating scale were explored. Rasch model scalar analysis methodology was employed with more than 4,000 scored essays across 2 elicitation prompts to gather information about the rating scale and rating process. Results suggested that the intervals between TWE scale steps were…
Descriptors: English (Second Language), Equated Scores, Essays, Interrater Reliability
Elander, James – Psychology Teaching Review, 2002
This article describes the development of assessment criteria for specific aspects of examination answers and coursework essays in psychology. The criteria specified the standards expected for seven aspects of students' work: addressing the question, covering the area, understanding the material, evaluating the material, developing arguments,…
Descriptors: Foreign Countries, Test Construction, Criteria, Item Analysis