Showing 1 to 15 of 27 results
Peer reviewed
Stefan Ruseti; Ionut Paraschiv; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
Automated Essay Scoring (AES) is a well-studied problem in Natural Language Processing applied in education. Solutions vary from handcrafted linguistic features to large Transformer-based models, implying a significant effort in feature extraction and model implementation. We introduce a novel Automated Machine Learning (AutoML) pipeline…
Descriptors: Computer Assisted Testing, Scoring, Automation, Essays
Peer reviewed
PDF on ERIC
Allen, Laura Kristen; Magliano, Joseph P.; McCarthy, Kathryn S.; Sonia, Allison N.; Creer, Sarah D.; McNamara, Danielle S. – Grantee Submission, 2021
The current study examined the extent to which the cohesion detected in readers' constructed responses to multiple documents was predictive of persuasive, source-based essay quality. Participants (N=95) completed multiple-documents reading tasks wherein they were prompted to think-aloud, self-explain, or evaluate the sources while reading a set of…
Descriptors: Reading Comprehension, Connected Discourse, Reader Response, Natural Language Processing
Peer reviewed
PDF on ERIC
Wan, Qian; Crossley, Scott; Allen, Laura; McNamara, Danielle – Grantee Submission, 2020
In this paper, we extracted content-based and structure-based features of text to predict human annotations for claims and nonclaims in argumentative essays. We compared Logistic Regression, Bernoulli Naive Bayes, Gaussian Naive Bayes, Linear Support Vector Classification, Random Forest, and Neural Networks to train classification models. Random…
Descriptors: Persuasive Discourse, Essays, Writing Evaluation, Natural Language Processing
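The Wan et al. abstract describes comparing several classifiers (including Bernoulli Naive Bayes) on features extracted to separate claims from non-claims. As a minimal, purely illustrative sketch of one such classifier, the snippet below implements Bernoulli Naive Bayes from scratch on invented binary features; the feature names and toy data are hypothetical and are not the authors' feature set.

```python
import math

def train_bernoulli_nb(X, y):
    """Fit Bernoulli Naive Bayes on binary feature vectors (add-one smoothing)."""
    classes = sorted(set(y))
    n_feats = len(X[0])
    priors, cond = {}, {}
    for c in classes:
        rows = [x for x, label in zip(X, y) if label == c]
        priors[c] = len(rows) / len(X)
        # P(feature = 1 | class), smoothed so no probability is exactly 0 or 1
        cond[c] = [(sum(r[j] for r in rows) + 1) / (len(rows) + 2)
                   for j in range(n_feats)]
    return classes, priors, cond

def predict(model, x):
    """Return the class with the highest log-posterior for binary vector x."""
    classes, priors, cond = model
    def log_post(c):
        lp = math.log(priors[c])
        for xj, pj in zip(x, cond[c]):
            lp += math.log(pj if xj else 1 - pj)
        return lp
    return max(classes, key=log_post)

# Hypothetical binary features per sentence:
# [has_stance_verb, has_modal, has_first_person, has_citation_marker]
train_X = [[1, 1, 1, 0], [1, 0, 1, 0], [1, 1, 0, 0],
           [0, 0, 0, 1], [0, 0, 1, 1], [0, 1, 0, 1]]
train_y = ["claim", "claim", "claim", "nonclaim", "nonclaim", "nonclaim"]
model = train_bernoulli_nb(train_X, train_y)
print(predict(model, [1, 1, 0, 0]))  # -> "claim" on these toy data
```

In practice a library implementation (e.g., scikit-learn's `BernoulliNB`) and real linguistic features would replace this toy setup; the sketch only shows the shape of the comparison described in the abstract.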
Öncel, Püren; Flynn, Lauren E.; Sonia, Allison N.; Barker, Kennis E.; Lindsay, Grace C.; McClure, Caleb M.; McNamara, Danielle S.; Allen, Laura K. – Grantee Submission, 2021
Automated Writing Evaluation systems have been developed to help students improve their writing skills through the automated delivery of both summative and formative feedback. These systems have demonstrated strong potential in a variety of educational contexts; however, they remain limited in their personalization and scope. The purpose of the…
Descriptors: Computer Assisted Instruction, Writing Evaluation, Formative Evaluation, Summative Evaluation
Sonia, Allison N.; Magliano, Joseph P.; McCarthy, Kathryn S.; Creer, Sarah D.; McNamara, Danielle S.; Allen, Laura K. – Grantee Submission, 2022
The constructed responses individuals generate while reading can provide insights into their coherence-building processes. The current study examined how the cohesion of constructed responses relates to performance on an integrated writing task. Participants (N = 95) completed a multiple document reading task wherein they were prompted to think…
Descriptors: Natural Language Processing, Connected Discourse, Reading Processes, Writing Skills
Crossley, Scott; Wan, Qian; Allen, Laura; McNamara, Danielle – Grantee Submission, 2021
Synthesis writing is widely taught across domains and serves as an important means of assessing writing ability, text comprehension, and content learning. Synthesis writing differs from other types of writing in terms of both cognitive and task demands because it requires writers to integrate information across source materials. However, little is…
Descriptors: Writing Skills, Cognitive Processes, Essays, Cues
Peer reviewed
PDF on ERIC
Balyan, Renu; McCarthy, Kathryn S.; McNamara, Danielle S. – Grantee Submission, 2017
This study examined how machine learning and natural language processing (NLP) techniques can be leveraged to assess the interpretive behavior that is required for successful literary text comprehension. We compared the accuracy of seven different machine learning classification algorithms in predicting human ratings of student essays about…
Descriptors: Artificial Intelligence, Natural Language Processing, Reading Comprehension, Literature
MacArthur, Charles A.; Jennings, Amanda; Philippakos, Zoi A. – Grantee Submission, 2018
The study developed a model of linguistic constructs to predict writing quality for college basic writers and analyzed how those constructs changed following instruction. Analysis used a corpus of argumentative essays from a quasi-experimental, instructional study with 252 students (MacArthur, Philippakos, & Ianetta, 2015) that found large…
Descriptors: College Students, Writing Skills, Writing Evaluation, Writing Achievement
Peer reviewed
PDF on ERIC
Zhang, H.; Magooda, A.; Litman, D.; Correnti, R.; Wang, E.; Matsumura, L. C.; Howe, E.; Quintana, R. – Grantee Submission, 2019
Writing a good essay typically involves students revising an initial paper draft after receiving feedback. We present eRevise, a web-based writing and revising environment that uses natural language processing features generated for rubric-based essay scoring to trigger formative feedback messages regarding students' use of evidence in…
Descriptors: Formative Evaluation, Essays, Writing (Composition), Revision (Written Composition)
Peer reviewed
PDF on ERIC
Allen, Laura K.; Jacovina, Matthew E.; Dascalu, Mihai; Roscoe, Rod D.; Kent, Kevin M.; Likens, Aaron D.; McNamara, Danielle S. – Grantee Submission, 2016
This study investigates how and whether information about students' writing can be recovered from basic behavioral data extracted during their sessions in an intelligent tutoring system for writing. We calculate basic and time-sensitive keystroke indices based on log files of keys pressed during students' writing sessions. A corpus of prompt-based…
Descriptors: Essays, Writing Processes, Writing (Composition), Writing Instruction
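The Allen et al. (2016) abstract describes computing basic and time-sensitive keystroke indices from logs of keys pressed during writing sessions. A minimal sketch of such indices is shown below; the log format, the 2-second pause threshold, and the specific indices are illustrative assumptions, not the study's actual measures.

```python
def keystroke_indices(log, pause_threshold=2.0):
    """Basic timing indices from a keystroke log of (timestamp_sec, key) events.

    Assumes events are ordered by time; the indices here (mean inter-key
    interval, pause count, total pause time) are illustrative examples only.
    """
    times = [t for t, _ in log]
    ikis = [b - a for a, b in zip(times, times[1:])]  # inter-key intervals
    pauses = [iki for iki in ikis if iki >= pause_threshold]
    return {
        "keystrokes": len(log),
        "mean_iki": sum(ikis) / len(ikis) if ikis else 0.0,
        "pause_count": len(pauses),   # long hesitations
        "pause_time": sum(pauses),    # total time spent in long pauses
    }

# Toy log: a burst, a long pause, then a second burst
log = [(0.0, "T"), (0.2, "h"), (0.35, "e"),
       (3.0, " "), (3.2, "c"), (3.4, "a"), (3.5, "t")]
print(keystroke_indices(log))
```

Real tutoring-system logs would also record key identity, backspaces, and edit location; this sketch covers only the timing side the abstract mentions.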
Peer reviewed
PDF on ERIC
Allen, Laura K.; Likens, Aaron D.; McNamara, Danielle S. – Grantee Submission, 2017
The current study examined the degree to which the quality and characteristics of students' essays could be modeled through dynamic natural language processing analyses. Undergraduate students (n = 131) wrote timed, persuasive essays in response to an argumentative writing prompt. Recurrent patterns of the words in the essays were then analyzed…
Descriptors: Writing Evaluation, Essays, Persuasive Discourse, Natural Language Processing
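The Allen, Likens, and McNamara (2017) abstract describes analyzing recurrent patterns of words in essays. The study's own method is a dynamic recurrence-based analysis; the sketch below shows only a much-simplified relative, a word recurrence rate (the density of a word-level recurrence plot), on an invented example sentence.

```python
def word_recurrence_rate(text):
    """Share of ordered word-position pairs (i < j) where the same word
    recurs -- the density of a word-level recurrence plot. A simplified
    illustration, not the full recurrence analysis used in the study."""
    words = text.lower().split()
    n = len(words)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    recurrent = sum(1 for i, j in pairs if words[i] == words[j])
    return recurrent / len(pairs) if pairs else 0.0

essay = "the essay repeats the same claim and the same evidence"
print(round(word_recurrence_rate(essay), 3))  # 4 recurrent pairs of 45 -> 0.089
```

Higher rates indicate more word repetition; the study relates such recurrent structure in essays to human quality ratings.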
Peer reviewed
PDF on ERIC
Beigman Klebanov, Beata; Priniski, Stacy; Burstein, Jill; Gyawali, Binod; Harackiewicz, Judith; Thoman, Dustin – Grantee Submission, 2018
Collection and analysis of students' writing samples on a large scale is a part of the research agenda of the emerging writing analytics community that promises to deliver an unprecedented insight into characteristics of student writing. Yet with a large scale often comes variability of contexts in which the samples were produced--different…
Descriptors: Learning Analytics, Context Effect, Automation, Generalization
Peer reviewed
PDF on ERIC
Allen, Laura K.; Perret, Cecile; McNamara, Danielle S. – Grantee Submission, 2016
The relationship between working memory capacity and writing ability was examined via a linguistic analysis of student essays. Undergraduate students (n = 108) wrote timed, prompt-based essays and completed a battery of cognitive assessments. The surface- and discourse-level linguistic features of students' essays were then analyzed using natural…
Descriptors: Cognitive Processes, Writing (Composition), Short Term Memory, Writing Ability
Peer reviewed
PDF on ERIC
Johnson, Amy M.; McCarthy, Kathryn S.; Kopp, Kristopher J.; Perret, Cecile A.; McNamara, Danielle S. – Grantee Submission, 2017
Intelligent tutoring systems for ill-defined domains, such as reading and writing, are critically needed, yet uncommon. Two such systems, the Interactive Strategy Training for Active Reading and Thinking (iSTART) and Writing Pal (W-Pal) use natural language processing (NLP) to assess learners' written (i.e., typed) responses and provide immediate,…
Descriptors: Reading Instruction, Writing Instruction, Intelligent Tutoring Systems, Reading Strategies
Allen, Laura K.; Likens, Aaron D.; McNamara, Danielle S. – Grantee Submission, 2018
The assessment of writing proficiency generally includes analyses of the specific linguistic and rhetorical features contained in the singular essays produced by students. However, researchers have recently proposed that an individual's ability to flexibly adapt the linguistic properties of their writing might more closely capture writing skill.…
Descriptors: Writing Evaluation, Writing Tests, Computer Assisted Testing, Writing Skills