Botarleanu, Robert-Mihai; Dascalu, Mihai; Allen, Laura K.; Crossley, Scott Andrew; McNamara, Danielle S. – Grantee Submission, 2022
Automated scoring of student language is a complex task that requires systems to emulate complex and multi-faceted human evaluation criteria. Summary scoring brings an additional layer of complexity to automated scoring because it involves two texts of differing lengths that must be compared. In this study, we present our approach to automate…
Descriptors: Automation, Scoring, Documentation, Likert Scales
Allen, Laura K.; Mills, Caitlin; Perret, Cecile; McNamara, Danielle S. – Grantee Submission, 2019
This study examines the extent to which instructions to self-explain vs. "other"-explain a text lead readers to produce different forms of explanations. Natural language processing was used to examine the content and characteristics of the explanations produced as a function of instruction condition. Undergraduate students (n = 146)…
Descriptors: Language Processing, Science Instruction, Computational Linguistics, Teaching Methods
Botarleanu, Robert-Mihai; Dascalu, Mihai; Allen, Laura K.; Crossley, Scott Andrew; McNamara, Danielle S. – Grantee Submission, 2021
Text summarization is an effective reading comprehension strategy. However, summary evaluation is complex and must account for various factors including the summary and the reference text. This study examines a corpus of approximately 3,000 summaries based on 87 reference texts, with each summary being manually scored on a 4-point Likert scale.…
Descriptors: Computer Assisted Testing, Scoring, Natural Language Processing, Computer Software
McCarthy, Kathryn S.; Allen, Laura K.; Hinze, Scott R. – Grantee Submission, 2020
Open-ended "constructed responses" promote deeper processing of course materials. Further, evaluation of these explanations can yield important information about students' cognition. This study examined how students' constructed responses, generated at different points during learning, relate to their later comprehension outcomes.…
Descriptors: Reading Comprehension, Prediction, Responses, College Students
Öncel, Püren; Flynn, Lauren E.; Sonia, Allison N.; Barker, Kennis E.; Lindsay, Grace C.; McClure, Caleb M.; McNamara, Danielle S.; Allen, Laura K. – Grantee Submission, 2021
Automated Writing Evaluation systems have been developed to help students improve their writing skills through the automated delivery of both summative and formative feedback. These systems have demonstrated strong potential in a variety of educational contexts; however, they remain limited in their personalization and scope. The purpose of the…
Descriptors: Computer Assisted Instruction, Writing Evaluation, Formative Evaluation, Summative Evaluation
Sonia, Allison N.; Magliano, Joseph P.; McCarthy, Kathryn S.; Creer, Sarah D.; McNamara, Danielle S.; Allen, Laura K. – Grantee Submission, 2022
The constructed responses individuals generate while reading can provide insights into their coherence-building processes. The current study examined how the cohesion of constructed responses relates to performance on an integrated writing task. Participants (N = 95) completed a multiple document reading task wherein they were prompted to think…
Descriptors: Natural Language Processing, Connected Discourse, Reading Processes, Writing Skills
Allen, Laura K.; Jacovina, Matthew E.; Dascalu, Mihai; Roscoe, Rod D.; Kent, Kevin M.; Likens, Aaron D.; McNamara, Danielle S. – Grantee Submission, 2016
This study investigates how and whether information about students' writing can be recovered from basic behavioral data extracted during their sessions in an intelligent tutoring system for writing. We calculate basic and time-sensitive keystroke indices based on log files of keys pressed during students' writing sessions. A corpus of prompt-based…
Descriptors: Essays, Writing Processes, Writing (Composition), Writing Instruction
Allen, Laura K.; Likens, Aaron D.; McNamara, Danielle S. – Grantee Submission, 2017
The current study examined the degree to which the quality and characteristics of students' essays could be modeled through dynamic natural language processing analyses. Undergraduate students (n = 131) wrote timed, persuasive essays in response to an argumentative writing prompt. Recurrent patterns of the words in the essays were then analyzed…
Descriptors: Writing Evaluation, Essays, Persuasive Discourse, Natural Language Processing
Allen, Laura K.; Perret, Cecile; McNamara, Danielle S. – Grantee Submission, 2016
The relationship between working memory capacity and writing ability was examined via a linguistic analysis of student essays. Undergraduate students (n = 108) wrote timed, prompt-based essays and completed a battery of cognitive assessments. The surface- and discourse-level linguistic features of students' essays were then analyzed using natural…
Descriptors: Cognitive Processes, Writing (Composition), Short Term Memory, Writing Ability
Allen, Laura K.; Likens, Aaron D.; McNamara, Danielle S. – Grantee Submission, 2018
The assessment of writing proficiency generally includes analyses of the specific linguistic and rhetorical features contained in the singular essays produced by students. However, researchers have recently proposed that an individual's ability to flexibly adapt the linguistic properties of their writing might more closely capture writing skill.…
Descriptors: Writing Evaluation, Writing Tests, Computer Assisted Testing, Writing Skills
Crossley, Scott A.; Allen, Laura K.; Kyle, Kristopher; McNamara, Danielle S. – Discourse Processes: A Multidisciplinary Journal, 2014
Natural language processing (NLP) provides a powerful approach for discourse processing researchers. However, there remains a notable degree of hesitation by some researchers to consider using NLP, at least on their own. The purpose of this article is to introduce and make available a "simple" NLP (SiNLP) tool. The overarching goal of…
Descriptors: Natural Language Processing, Discourse Analysis, Language Research, Computer Oriented Programs
Allen, Laura K.; McNamara, Danielle S. – International Educational Data Mining Society, 2015
The current study investigates the degree to which the lexical properties of students' essays can inform stealth assessments of their vocabulary knowledge. In particular, we used indices calculated with the natural language processing tool, TAALES, to predict students' performance on a measure of vocabulary knowledge. To this end, two corpora were…
Descriptors: Vocabulary, Knowledge Level, Models, Natural Language Processing
Allen, Laura K.; Jacovina, Matthew E.; Dascalu, Mihai; Roscoe, Rod D.; Kent, Kevin M.; Likens, Aaron D.; McNamara, Danielle S. – International Educational Data Mining Society, 2016
This study investigates how and whether information about students' writing can be recovered from basic behavioral data extracted during their sessions in an intelligent tutoring system for writing. We calculate basic and time-sensitive keystroke indices based on log files of keys pressed during students' writing sessions. A corpus of prompt-based…
Descriptors: Writing Processes, Intelligent Tutoring Systems, Natural Language Processing, Feedback (Response)
Allen, Laura K.; Likens, Aaron D.; McNamara, Danielle S. – Grantee Submission, 2018
The assessment of argumentative writing generally includes analyses of the specific linguistic and rhetorical features contained in the individual essays produced by students. However, researchers have recently proposed that an individual's ability to flexibly adapt the linguistic properties of their writing may more accurately capture their…
Descriptors: Writing (Composition), Persuasive Discourse, Essays, Language Usage
Allen, Laura K.; Mills, Caitlin; Jacovina, Matthew E.; Crossley, Scott; D'Mello, Sidney; McNamara, Danielle S. – Grantee Submission, 2016
Writing training systems have been developed to provide students with instruction and deliberate practice on their writing. Although generally successful in providing accurate scores, a common criticism of these systems is their lack of personalization and adaptive instruction. In particular, these systems tend to place the strongest emphasis on…
Descriptors: Learner Engagement, Psychological Patterns, Writing Instruction, Essays