Showing all 12 results
Peer reviewed
Direct link
Latifi, Syed; Gierl, Mark – Language Testing, 2021
An automated essay scoring (AES) program is a software system that uses techniques from corpus and computational linguistics and machine learning to grade essays. In this study, we aimed to describe and evaluate particular language features of Coh-Metrix for a novel AES program that would score junior and senior high school students' essays from…
Descriptors: Writing Evaluation, Computer Assisted Testing, Scoring, Essays
Peer reviewed
Download full text (PDF on ERIC)
Naqiyah, Mardhiyyatin; Rosana, Dadan; Sukardiyono; Ernasari – International Journal of Instruction, 2020
This research aimed to (1) produce instruments feasible for measuring the ability to solve physics problems and nationalism, and (2) determine the quality of the instruments developed. The research was conducted in four stages: design, preparation of tests, test trials, and preparation of valid instruments.…
Descriptors: Nationalism, High School Students, Physics, Science Instruction
Peer reviewed
Direct link
Guo, Xiuyan; Lei, Pui-Wa – International Journal of Testing, 2020
Little research has been done on the effects of peer raters' quality characteristics on peer rating qualities. This study aims to address this gap and investigate the effects of key variables related to peer raters' qualities, including content knowledge, previous rating experience, training on rating tasks, and rating motivation. In an experiment…
Descriptors: Peer Evaluation, Error Patterns, Correlation, Knowledge Level
Peer reviewed
Download full text (PDF on ERIC)
Rupp, André A.; Casabianca, Jodi M.; Krüger, Maleika; Keller, Stefan; Köller, Olaf – ETS Research Report Series, 2019
In this research report, we describe the design and empirical findings for a large-scale study of essay writing ability with approximately 2,500 high school students in Germany and Switzerland on the basis of 2 tasks with 2 associated prompts, each from a standardized writing assessment whose scoring involved both human and automated components.…
Descriptors: Automation, Foreign Countries, English (Second Language), Language Tests
Peer reviewed
Download full text (PDF on ERIC)
Zhang, Fan; Litman, Diane – Grantee Submission, 2015
This paper explores the annotation and classification of students' revision behaviors in argumentative writing. A sentence-level revision schema is proposed to capture why and how students make revisions. Based on the proposed schema, a small corpus of student essays and revisions was annotated. Studies show that manual annotation is reliable with…
Descriptors: Notetaking, Classification, Persuasive Discourse, Revision (Written Composition)
Peer reviewed
Download full text (PDF on ERIC)
Allen, Laura K.; Crossley, Scott A.; McNamara, Danielle S. – Grantee Submission, 2015
We investigated linguistic factors that relate to misalignment between students' and teachers' ratings of essay quality. Students (n = 126) wrote essays and rated the quality of their work. Teachers then provided their own ratings of the essays. Results revealed that students who were less accurate in their self-assessments produced essays that…
Descriptors: Essays, Scores, Natural Language Processing, Interrater Reliability
Peer reviewed
Download full text (PDF on ERIC)
Darling-Hammond, Linda – Learning Policy Institute, 2017
After passage of the Every Student Succeeds Act (ESSA) in 2015, states assumed greater responsibility for designing their own accountability and assessment systems. ESSA requires states to measure "higher order thinking skills and understanding" and encourages the use of open-ended performance assessments, which are essential for…
Descriptors: Performance Based Assessment, Accountability, Portfolios (Background Materials), Task Analysis
Darling-Hammond, Linda – Council of Chief State School Officers, 2017
The Every Student Succeeds Act (ESSA) opened up new possibilities for how student and school success are defined and supported in American public education. States now have greater responsibility for designing and building their assessment and accountability systems. These new opportunities to develop performance assessments are critically important…
Descriptors: Performance Based Assessment, Accountability, Portfolios (Background Materials), Task Analysis
Peer reviewed
Direct link
Tsai, Min-hsiu – Action in Teacher Education, 2012
This study investigates the consistency between human raters and an automated essay scoring system in grading high school students' English compositions. A total of 923 essays from 23 classes across 12 senior high schools in Taiwan (Republic of China) were obtained and scored both manually and electronically. The results show that the consistency between…
Descriptors: Foreign Countries, High School Students, Writing (Composition), Essays
Mlodinow, Leonard – Chronicle of Higher Education, 2008
In this article, the author talks about the release of the most comprehensive study of SAT exams. The headline on the Web site of the College Board, the maker of the test, was, "SAT Studies Show Test's Strength in Predicting College Success." At the same time, a headline on the Web site of the group FairTest, a 23-year-old, nonprofit…
Descriptors: Writing Tests, Academic Achievement, Grading, Standardized Tests
Kobrin, Jennifer L.; Kimmel, Ernest W. – College Board, 2006
Based on statistics from the first few administrations of the SAT writing section, the test is performing as expected. The reliability of the writing section is very similar to that of other writing assessments. Based on preliminary validity research, the writing section is expected to add modestly to the prediction of college performance when…
Descriptors: Test Construction, Writing Tests, Cognitive Tests, College Entrance Examinations
Breland, Hunter; Kubota, Melvin; Nickerson, Kristine; Trapani, Catherine; Walker, Michael – College Entrance Examination Board, 2004
This study investigated the impact on ethnic, language, and gender groups of a new kind of essay prompt type intended for use with the new SAT®. The study also generated estimates of the reliability of scores obtained using the prompts examined. To examine the impact of a new prompt type, random samples of eleventh-grade students in 49…
Descriptors: College Entrance Examinations, Standardized Tests, Ethnic Groups, Language Usage