Publication Date
In 2025 | 0 |
Since 2024 | 4 |
Since 2021 (last 5 years) | 42 |
Since 2016 (last 10 years) | 78 |
Since 2006 (last 20 years) | 94 |
Source
Grantee Submission | 94 |
Author
McNamara, Danielle S. | 22 |
Allen, Laura K. | 15 |
Burstein, Jill | 9 |
Crossley, Scott A. | 9 |
Litman, Diane | 9 |
Roscoe, Rod D. | 6 |
Zhang, Haoran | 6 |
Beigman Klebanov, Beata | 5 |
Al Otaiba, Stephanie | 4 |
Allen, Laura | 4 |
Correnti, Richard | 4 |
Publication Type
Reports - Research | 85 |
Speeches/Meeting Papers | 31 |
Journal Articles | 27 |
Tests/Questionnaires | 7 |
Reports - Descriptive | 5 |
Reports - Evaluative | 4 |
Information Analyses | 3 |
Audience
Researchers | 1 |
Teachers | 1 |
Location
California | 3 |
Louisiana | 3 |
Texas | 3 |
Arizona (Phoenix) | 2 |
Pennsylvania | 2 |
Wisconsin | 2 |
Alabama | 1 |
California (Long Beach) | 1 |
Florida | 1 |
Idaho | 1 |
Illinois | 1 |
What Works Clearinghouse Rating
Meets WWC Standards without Reservations | 1 |
Meets WWC Standards with or without Reservations | 1 |
Does not meet standards | 1 |

Benjamin Motz; Harmony Jankowski; Jennifer Lopatin; Waverly Tseng; Tamara Tate – Grantee Submission, 2024
Platform-enabled research services control, manage, and measure learner experiences within a given platform. In this paper, we consider the need for research services that examine learner experiences "outside" the platform. For example, we describe an effort to conduct an experiment on peer assessment in a college writing course, where…
Descriptors: Educational Technology, Learning Management Systems, Electronic Learning, Peer Evaluation
Deborah K. Reed; Kelly Binning; Emily A. Jemison; Nicole DeSalle – Grantee Submission, 2022
Increased expectations for writing performance have created a need for formative writing assessments that will help middle school teachers better understand adolescents' grade appropriate writing skills and monitor the progress of students with or at risk for writing disabilities. In this practice piece, we first explain research-based…
Descriptors: Formative Evaluation, Writing Evaluation, Middle School Students, Prompting
Oddis, Kyle; Burstein, Jill; McCaffrey, Daniel F.; Holtzman, Steven L. – Grantee Submission, 2022
Background: Researchers interested in quantitative measures of student "success" in writing cannot control completely for contextual factors which are local and site-based (i.e., in context of a specific instructor's writing classroom at a specific institution). (In)ability to control for curriculum in studies of student writing…
Descriptors: Writing Instruction, Writing Achievement, Curriculum Evaluation, College Instruction
Danielle S. McNamara; Panayiota Kendeou – Grantee Submission, 2022
We propose a framework designed to guide the development of automated writing practice and formative evaluation and feedback for young children (K-5th grade) -- the early Automated Writing Evaluation (early-AWE) Framework. early-AWE is grounded on the fundamental assumption that AWE is needed for young developing readers, but must incorporate…
Descriptors: Writing Evaluation, Automation, Formative Evaluation, Feedback (Response)
Öncel, Püren; Flynn, Lauren E.; Sonia, Allison N.; Barker, Kennis E.; Lindsay, Grace C.; McClure, Caleb M.; McNamara, Danielle S.; Allen, Laura K. – Grantee Submission, 2021
Automated Writing Evaluation systems have been developed to help students improve their writing skills through the automated delivery of both summative and formative feedback. These systems have demonstrated strong potential in a variety of educational contexts; however, they remain limited in their personalization and scope. The purpose of the…
Descriptors: Computer Assisted Instruction, Writing Evaluation, Formative Evaluation, Summative Evaluation
Tong Li; Sarah D. Creer; Tracy Arner; Rod D. Roscoe; Laura K. Allen; Danielle S. McNamara – Grantee Submission, 2022
Automated writing evaluation (AWE) tools can facilitate teachers' analysis of and feedback on students' writing. However, increasing evidence indicates that writing instructors experience challenges in implementing AWE tools successfully. For this reason, our development of the Writing Analytics Tool (WAT) has employed a participatory approach…
Descriptors: Automation, Writing Evaluation, Learning Analytics, Participatory Research
Wesley Morris; Scott Crossley; Langdon Holmes; Chaohua Ou; Danielle McNamara; Mihai Dascalu – Grantee Submission, 2023
As intelligent textbooks become more ubiquitous in classrooms and educational settings, the need arises to automatically provide formative feedback on students' written responses to readings. This study develops models to automatically provide feedback to student summaries written at the end of intelligent textbook sections.…
Descriptors: Textbooks, Electronic Publishing, Feedback (Response), Formative Evaluation

Yang Zhong; Mohamed Elaraby; Diane Litman; Ahmed Ashraf Butt; Muhsin Menekse – Grantee Submission, 2024
This paper introduces REFLECTSUMM, a novel summarization dataset specifically designed for summarizing students' reflective writing. The goal of REFLECTSUMM is to facilitate developing and evaluating novel summarization techniques tailored to real-world scenarios with little training data, with potential implications in the opinion summarization…
Descriptors: Documentation, Writing (Composition), Reflection, Metadata
Zhang, Haoran; Litman, Diane – Grantee Submission, 2020
While automated essay scoring (AES) can reliably grade essays at scale, automated writing evaluation (AWE) additionally provides formative feedback to guide essay revision. However, a neural AES typically does not provide useful feature representations for supporting AWE. This paper presents a method for linking AWE and neural AES, by extracting…
Descriptors: Computer Assisted Testing, Scoring, Essay Tests, Writing Evaluation
Michael Matta; Sterett H. Mercer; Milena A. Keller-Margulis – Grantee Submission, 2022
Written expression curriculum-based measurement (WE-CBM) is a formative assessment approach for screening and progress monitoring. To extend evaluation of WE-CBM, we compared hand-calculated and automated scoring approaches in relation to the number of screening samples needed per student for valid scores, the long-term predictive validity and…
Descriptors: Writing Evaluation, Writing Tests, Predictive Validity, Formative Evaluation
Richard Correnti; Lindsay Clare Matsumura; Elaine Lin Wang; Diane Litman; Haoran Zhang – Grantee Submission, 2022
Recent reviews of automated writing evaluation systems indicate lack of uniformity in the purpose, design, and assessment of such systems. Our work lies at the nexus of critical themes arising from these reviews. We describe our work on eRevise, an automated writing evaluation system focused on elementary students' text-based evidence-use. eRevise…
Descriptors: Writing Evaluation, Feedback (Response), Automation, Elementary School Students
Ross C. Anderson; Erin A. Chaparro; Keith Smolkowski; Rachel Cameron – Grantee Submission, 2023
Though argumentative writing is a vital skill across diverse content areas and domains, most U.S. students perform below grade level in writing, and teachers are often unprepared to address this shortfall because their training approaches writing as a subspecialty of reading rather than its own unique discipline. Writing instruction and assessment…
Descriptors: Persuasive Discourse, Formative Evaluation, Writing Evaluation, Scoring Rubrics
Hazelton, Lynette; Nastal, Jessica; Elliot, Norbert; Burstein, Jill; McCaffrey, Daniel F. – Grantee Submission, 2021
In writing studies research, automated writing evaluation technology is typically examined for a specific, often narrow purpose: to evaluate a particular writing improvement measure, to mine data for changes in writing performance, or to demonstrate the effectiveness of a single technology and accompanying validity arguments. This article adopts a…
Descriptors: Formative Evaluation, Writing Evaluation, Automation, Natural Language Processing
Park, Christina; Arshan, Nicole; Milby, Allison; Goetz, Rebecca – Grantee Submission, 2021
Developed by the National Writing Project (NWP), the College, Career, and Community Writers Program (C3WP) seeks to improve students' argument writing by building teachers' understanding of and skill in teaching source-based argument writing. The program features intensive professional development, skill-based instructional resources, and…
Descriptors: Writing Instruction, Persuasive Discourse, Program Evaluation, Instructional Effectiveness
MacArthur, Charles A.; Traga Philippakos, Zoi A.; May, Henry; Compello, Jill – Grantee Submission, 2021
The paper presents the results of a randomized experimental study of a writing curriculum for college developmental writing courses based on strategy instruction with self-regulation integrated with practices common in college composition. Students in a full semester course learned strategies for planning and revising based on rhetorical analysis…
Descriptors: Learning Strategies, Metacognition, Writing Instruction, Self Efficacy