Showing all 6 results
Peer reviewed
Elif Sari – International Journal of Assessment Tools in Education, 2024
Employing G-theory and rater interviews, the study investigated how a high-stakes writing assessment procedure (i.e., a single-task, single-rater, and holistic scoring procedure) impacted the variability and reliability of its scores within the Turkish higher education context. Thirty-two essays written on two different writing tasks (i.e.,…
Descriptors: Foreign Countries, High Stakes Tests, Writing Evaluation, Scores
Gary A. Troia; Frank R. Lawrence; Julie S. Brehmer; Kaitlin Glause; Heather L. Reichmuth – Grantee Submission, 2023
Much of the research that has examined the writing knowledge of school-age students has relied on interviews to ascertain this information, which is problematic because interviews may underestimate breadth and depth of writing knowledge, require lengthy interactions with participants, and do not permit a direct evaluation of a prescribed array of…
Descriptors: Writing Tests, Writing Evaluation, Knowledge Level, Elementary School Students
Peer reviewed
Zhang, Xiuyuan – AERA Online Paper Repository, 2019
The main purpose of the study is to evaluate the qualities of human essay ratings for a large-scale assessment using Rasch measurement theory. Specifically, Many-Facet Rasch Measurement (MFRM) was utilized to examine the rating scale category structure and provide important information about interpretations of ratings in the large-scale…
Descriptors: Essays, Evaluators, Writing Evaluation, Reliability
Peer reviewed
Rupp, André A.; Casabianca, Jodi M.; Krüger, Maleika; Keller, Stefan; Köller, Olaf – ETS Research Report Series, 2019
In this research report, we describe the design and empirical findings for a large-scale study of essay writing ability with approximately 2,500 high school students in Germany and Switzerland on the basis of 2 tasks with 2 associated prompts, each from a standardized writing assessment whose scoring involved both human and automated components.…
Descriptors: Automation, Foreign Countries, English (Second Language), Language Tests
Peer reviewed
Sudweeks, Richard R.; Reeve, Suzanne; Bradshaw, William S. – Assessing Writing, 2004
A pilot study was conducted to evaluate and improve the rating procedure proposed for use in a research effort designed to assess the essay writing ability of college sophomores. Generalizability theory and the Many-Facet Rasch Model were each used to (a) estimate potential sources of error in the ratings, (b) obtain reliability estimates, and…
Descriptors: Generalizability Theory, College Students, Writing Ability, Writing Evaluation
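For context on the generalizability theory analyses mentioned in several abstracts above (e.g., Sari, 2024; Sudweeks, Reeve, & Bradshaw, 2004), the following is a minimal, illustrative Python sketch of estimating variance components and a relative G coefficient for a fully crossed persons-by-raters design. The score matrix and rater count are hypothetical and are not taken from any of the listed studies.

    # Illustrative only: a two-facet (persons x raters) G-theory sketch with
    # hypothetical data; not code from any study in this result list.
    import numpy as np

    # Hypothetical ratings: rows = essays (persons), columns = raters.
    scores = np.array([
        [4, 5, 4],
        [2, 3, 2],
        [5, 5, 4],
        [3, 3, 3],
        [4, 4, 5],
    ], dtype=float)

    n_p, n_r = scores.shape
    grand = scores.mean()
    person_means = scores.mean(axis=1)
    rater_means = scores.mean(axis=0)

    # Mean squares for the random-effects persons x raters ANOVA.
    ms_p = n_r * np.sum((person_means - grand) ** 2) / (n_p - 1)
    ms_r = n_p * np.sum((rater_means - grand) ** 2) / (n_r - 1)
    residual = scores - person_means[:, None] - rater_means[None, :] + grand
    ms_pr = np.sum(residual ** 2) / ((n_p - 1) * (n_r - 1))

    # Estimated variance components.
    var_pr_e = ms_pr                         # person-by-rater interaction + error
    var_p = max((ms_p - ms_pr) / n_r, 0.0)   # persons (object of measurement)
    var_r = max((ms_r - ms_pr) / n_p, 0.0)   # raters

    # Relative G coefficient (reliability-like index) for a k-rater procedure.
    k = 1  # e.g., a single-rater scoring procedure
    g_rel = var_p / (var_p + var_pr_e / k)
    print(f"var_p={var_p:.3f}, var_r={var_r:.3f}, var_pr,e={var_pr_e:.3f}")
    print(f"Relative G coefficient with {k} rater(s): {g_rel:.3f}")

Raising k in the final step shows how averaging over more raters shrinks the error term in the denominator, which is the kind of design question the studies above examine.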
Godshalk, Fred I.; And Others – 1966
This study investigated the validity of various approaches to the measurement of English composition skills. Over 600 11th and 12th-grade students were asked to write five 20-minute essays on different topics, to take six objective tests of writing ability, and to do two interlinear exercises. Twenty-five experienced readers assigned scores of…
Descriptors: College Entrance Examinations, Educational Testing, English Instruction, Essay Tests