ERIC Number: ED624499
Record Type: Non-Journal
Publication Date: 2022
Pages: 15
Abstractor: As Provided
ISBN: N/A
ISSN: ISSN-2666-5573
EISSN: N/A
Available Date: N/A
Building a Validity Argument for an Automated Writing Evaluation System (eRevise) as a Formative Assessment
Richard Correnti; Lindsay Clare Matsumura; Elaine Lin Wang; Diane Litman; Haoran Zhang
Grantee Submission, Computers and Education Open, v3, Article 100084, 2022
Recent reviews of automated writing evaluation systems indicate a lack of uniformity in the purpose, design, and assessment of such systems. Our work lies at the nexus of critical themes arising from these reviews. We describe our work on eRevise, an automated writing evaluation system focused on elementary students' text-based evidence-use. eRevise is organized around a construct (i.e., evidence-use) central to argument writing, thus responding to calls for automated evaluations to assess substantive aspects of writing. Both the scoring and the feedback provided to students are based on features of evidence-use aligned with the rubric used in human scoring. Feedback is both data-driven (i.e., responsive to the draft essay) and based on expert guidance. Design features of the system promote formative assessment of writing aligned with instruction, thereby addressing calls for automated feedback systems to reflect best practices aligned with sociocultural learning theory. We examine a validity argument, including our assumptions and warrants, for the potential of eRevise to serve a formative assessment purpose. We provide evidence demonstrating improvement in students' writing from the first to the second draft aligned with the feedback students received. Students report understanding the feedback and finding it useful, and they also report improvement in their essays. Teachers report they would use eRevise again, and most saw it as aligned with their teaching. A multi-level model provides evidence that substantive teacher input in response to students' questions about their feedback was associated with higher improvement scores. We discuss implications for interactive effects between teaching and automated feedback systems.
Publication Type: Journal Articles; Reports - Research
Education Level: Elementary Education; Grade 5; Intermediate Grades; Middle Schools; Grade 6
Audience: N/A
Language: English
Sponsor: Institute of Education Sciences (ED)
Authoring Institution: N/A
Identifiers - Location: Louisiana
IES Funded: Yes
Grant or Contract Numbers: R305A160245
Author Affiliations: N/A