Showing 1 to 15 of 63 results
Peer reviewed
Direct link
Van Meenen, Florence; Coertjens, Liesje; Van Nes, Marie-Claire; Verschuren, Franck – Advances in Health Sciences Education, 2022
The present study explores two rating methods for peer assessment (analytical rating using criteria and comparative judgement) in light of concurrent validity, reliability and insufficient diagnosticity (i.e. the degree to which substandard work is recognised by the peer raters). During a second-year undergraduate course, students wrote a one-page…
Descriptors: Evaluation Methods, Peer Evaluation, Accuracy, Evaluation Criteria
Peer reviewed
PDF on ERIC Download full text
Wendler, Cathy; Glazer, Nancy; Cline, Frederick – ETS Research Report Series, 2019
One of the challenges in scoring constructed-response (CR) items and tasks is ensuring that rater drift does not occur during or across scoring windows. Rater drift reflects changes in how raters interpret and use established scoring criteria to assign essay scores. Calibration is a process used to help control rater drift and, as such, serves as…
Descriptors: College Entrance Examinations, Graduate Study, Accuracy, Test Reliability
Peer reviewed
Direct link
Beseiso, Majdi; Alzubi, Omar A.; Rashaideh, Hasan – Journal of Computing in Higher Education, 2021
E-learning is gradually gaining prominence in higher education, with universities enlarging provision and more students getting enrolled. The effectiveness of automated essay scoring (AES) is thus holding a strong appeal to universities for managing an increasing learning interest and reducing costs associated with human raters. The growth in…
Descriptors: Automation, Scoring, Essays, Writing Tests
Peer reviewed
PDF on ERIC Download full text
Elif Sari – International Journal of Assessment Tools in Education, 2024
Employing G-theory and rater interviews, the study investigated how a high-stakes writing assessment procedure (i.e., a single-task, single-rater, and holistic scoring procedure) impacted the variability and reliability of its scores within the Turkish higher education context. Thirty-two essays written on two different writing tasks (i.e.,…
Descriptors: Foreign Countries, High Stakes Tests, Writing Evaluation, Scores
Peer reviewed
Direct link
Ramon-Casas, Marta; Nuño, Neus; Pons, Ferran; Cunillera, Toni – Assessment & Evaluation in Higher Education, 2019
This article presents an empirical evaluation of the validity and reliability of a peer-assessment activity to improve academic writing competences. Specifically, we explored a large group of psychology undergraduate students with different initial writing skills. Participants (n = 365) produced two different essays, which were evaluated by their…
Descriptors: Peer Evaluation, Validity, Reliability, Writing Skills
Peer reviewed
Direct link
Pruchnic, Jeff; Barton, Ellen; Primeau, Sarah; Trimble, Thomas; Varty, Nicole; Foster, Tanina – Composition Forum, 2021
Over the past two decades, reflective writing has occupied an increasingly prominent position in composition theory, pedagogy, and assessment as researchers have described the value of reflection and reflective writing in college students' development of higher-order writing skills, such as genre conventions (Yancey, "Reflection";…
Descriptors: Reflection, Correlation, Essays, Freshman Composition
Peer reviewed
Direct link
Lian Li; Jiehui Hu; Yu Dai; Ping Zhou; Wanhong Zhang – Reading & Writing Quarterly, 2024
This paper proposes to use depth perception to represent raters' decision in holistic evaluation of ESL essays, as an alternative medium to conventional form of numerical scores. The researchers verified the new method's accuracy and inter/intra-rater reliability by inviting 24 ESL teachers to perform different representations when rating 60…
Descriptors: Essays, Holistic Approach, Writing Evaluation, Accuracy
Peer reviewed
Direct link
Ullmann, Thomas Daniel – International Journal of Artificial Intelligence in Education, 2019
Reflective writing is an important educational practice to train reflective thinking. Currently, researchers must manually analyze these writings, limiting practice and research because the analysis is time and resource consuming. This study evaluates whether machine learning can be used to automate this manual analysis. The study investigates…
Descriptors: Reflection, Writing (Composition), Writing Evaluation, Automation
Peer reviewed
PDF on ERIC Download full text
Husain Abdulhay; Moussa Ahmadian – rEFLections, 2024
This study attempted to discern the factor structure of the achievement goal orientation and goal structure constructs across the domain-specific task of essay writing in an Iranian EFL context. A convenience sample of 116 public university learners participated in a single-session, in-class study of an essay writing sampling and an immediate…
Descriptors: Foreign Countries, Factor Structure, Goal Orientation, Factor Analysis
Peer reviewed
Direct link
Chung-You Tsai; Yi-Ti Lin; Iain Kelsall Brown – Education and Information Technologies, 2024
To determine the impacts of using ChatGPT to assist English as a foreign language (EFL) English college majors in revising essays and the possibility of leading to higher scores and potentially causing unfairness. A prospective, double-blinded, paired-comparison study was conducted in Feb. 2023. A total of 44 students provided 44 original essays…
Descriptors: Artificial Intelligence, Computer Software, Technology Uses in Education, English (Second Language)
Peer reviewed
PDF on ERIC Download full text
Song, Yi; Deane, Paul; Beigman Klebanov, Beata – ETS Research Report Series, 2017
This project focuses on laying the foundations for automated analysis of argumentation schemes, supporting identification and classification of the arguments being made in a text, for the purpose of scoring the quality of written analyses of arguments. We developed annotation protocols for 20 argument prompts from a college-level test under the…
Descriptors: Scoring, Automation, Persuasive Discourse, Documentation
Peer reviewed
Direct link
Ghanbari, Nasim; Barati, Hossein – Language Testing in Asia, 2020
The present study reports the process of development and validation of a rating scale in the Iranian EFL academic writing assessment context. To achieve this goal, the study was conducted in three distinct phases. Early in the study, the researcher interviewed a number of raters in different universities. Next, a questionnaire was developed based…
Descriptors: Rating Scales, Writing Evaluation, English for Academic Purposes, Second Language Learning
Peer reviewed
PDF on ERIC Download full text
Divayana, Dewa Gede Hendra; Adiarta, Agus; Suyasa, P. Wayan Arta – Journal of Technology and Science Education, 2021
One of the free platforms made by IT companies in the education sector in Indonesia can be used to facilitate online learning at home during the "COVID-19" pandemic. The platform is called the "SEVIMA EdLink." This platform needs to be known by academics and the wider community of education in the world. This platform provides…
Descriptors: Foreign Countries, Educational Technology, Technology Uses in Education, School Closing
Peer reviewed
Direct link
Ke, Xiaohua; Zeng, Yongqiang; Luo, Haijiao – Journal of Educational Measurement, 2016
This article presents a novel method, the Complex Dynamics Essay Scorer (CDES), for automated essay scoring using complex network features. Texts produced by college students in China were represented as scale-free networks (e.g., a word adjacency model) from which typical network features, such as the in-/out-degrees, clustering coefficient (CC),…
Descriptors: Scoring, Automation, Essays, Networks
Peer reviewed
Direct link
He, Tung-hsien – SAGE Open, 2019
This study employed a mixed-design approach and the Many-Facet Rasch Measurement (MFRM) framework to investigate whether rater bias occurred between the onscreen scoring (OSS) mode and the paper-based scoring (PBS) mode. Nine human raters analytically marked scanned scripts and paper scripts using a six-category (i.e., six-criterion) rating…
Descriptors: Computer Assisted Testing, Scoring, Item Response Theory, Essays