Showing 1 to 15 of 16 results
Peer reviewed
Allen, Laura Kristen; Magliano, Joseph P.; McCarthy, Kathryn S.; Sonia, Allison N.; Creer, Sarah D.; McNamara, Danielle S. – Grantee Submission, 2021
The current study examined the extent to which the cohesion detected in readers' constructed responses to multiple documents was predictive of persuasive, source-based essay quality. Participants (N=95) completed multiple-documents reading tasks wherein they were prompted to think-aloud, self-explain, or evaluate the sources while reading a set of…
Descriptors: Reading Comprehension, Connected Discourse, Reader Response, Natural Language Processing
Peer reviewed
Pugh, Samuel L.; Subburaj, Shree Krishna; Rao, Arjun Ramesh; Stewart, Angela E. B.; Andrews-Todd, Jessica; D'Mello, Sidney K. – International Educational Data Mining Society, 2021
We investigated the feasibility of using automatic speech recognition (ASR) and natural language processing (NLP) to classify collaborative problem solving (CPS) skills from recorded speech in noisy environments. We analyzed data from 44 dyads of middle and high school students who used videoconferencing to collaboratively solve physics and math…
Descriptors: Problem Solving, Cooperation, Middle School Students, High School Students
Peer reviewed
Crossley, Scott; Kyle, Kristopher; Davenport, Jodi; McNamara, Danielle S. – International Educational Data Mining Society, 2016
This study introduces the Constructed Response Analysis Tool (CRAT), a freely available tool to automatically assess student responses in online tutoring systems. The study tests CRAT on a dataset of chemistry responses collected in the ChemVLab+. The findings indicate that CRAT can differentiate and classify student responses based on semantic…
Descriptors: Intelligent Tutoring Systems, Chemistry, Natural Language Processing, High School Students
Peer reviewed
Albacete, Patricia; Jordan, Pamela; Katz, Sandra; Chounta, Irene-Angelica; McLaren, Bruce M. – Grantee Submission, 2019
This paper describes an initial pilot study of Rimac, a natural-language tutoring system for physics. Rimac uses a student model to guide decisions about "what content to discuss next" during reflective dialogues that are initiated after students solve quantitative physics problems, and "how much support to provide" during…
Descriptors: Natural Language Processing, Teaching Methods, Educational Technology, Technology Uses in Education
Peer reviewed
Jordan, Pamela; Albacete, Patricia; Katz, Sandra – Grantee Submission, 2016
We explore the effectiveness of a simple algorithm for adaptively deciding whether to further decompose a step in a line of reasoning during tutorial dialogue. We compare two versions of a tutorial dialogue system, Rimac: one that always decomposes a step to its simplest sub-steps and one that adaptively decides to decompose a step based on a…
Descriptors: Algorithms, Decision Making, Intelligent Tutoring Systems, Scaffolding (Teaching Technique)
Allen, Laura K.; Likens, Aaron D.; McNamara, Danielle S. – Grantee Submission, 2018
The assessment of writing proficiency generally includes analyses of the specific linguistic and rhetorical features contained in the singular essays produced by students. However, researchers have recently proposed that an individual's ability to flexibly adapt the linguistic properties of their writing might more closely capture writing skill.…
Descriptors: Writing Evaluation, Writing Tests, Computer Assisted Testing, Writing Skills
Peer reviewed
Katz, Sandra; Albacete, Patricia; Jordan, Pamela – Grantee Submission, 2016
This poster reports on a study that compared three types of summaries at the end of natural-language tutorial dialogues and a no-dialogue control, to determine which type of summary, if any, best predicted learning gains. Although we found no significant differences between conditions, analyses of gender differences indicate that female students…
Descriptors: Natural Language Processing, Intelligent Tutoring Systems, Reflection, Dialogs (Language)
Allen, Laura K.; McNamara, Danielle S. – International Educational Data Mining Society, 2015
The current study investigates the degree to which the lexical properties of students' essays can inform stealth assessments of their vocabulary knowledge. In particular, we used indices calculated with the natural language processing tool, TAALES, to predict students' performance on a measure of vocabulary knowledge. To this end, two corpora were…
Descriptors: Vocabulary, Knowledge Level, Models, Natural Language Processing
Peer reviewed
Michalenko, Joshua J.; Lan, Andrew S.; Waters, Andrew E.; Grimaldi, Philip J.; Baraniuk, Richard G. – International Educational Data Mining Society, 2017
An important, yet largely unstudied problem in student data analysis is to detect "misconceptions" from students' responses to "open-response" questions. Misconception detection enables instructors to deliver more targeted feedback on the misconceptions exhibited by many students in their class, thus improving the quality of…
Descriptors: Data Analysis, Misconceptions, Student Attitudes, Feedback (Response)
Peer reviewed
Crossley, Scott; Allen, Laura K.; Snow, Erica L.; McNamara, Danielle S. – Grantee Submission, 2015
This study investigates a new approach to automatically assessing essay quality that combines traditional approaches based on assessing textual features with new approaches that measure student attributes such as demographic information, standardized test scores, and survey results. The results demonstrate that combining both text features and…
Descriptors: Automation, Scoring, Essays, Evaluation Methods
Peer reviewed
Allen, Laura K.; Crossley, Scott A.; McNamara, Danielle S. – Grantee Submission, 2015
We investigated linguistic factors that relate to misalignment between students' and teachers' ratings of essay quality. Students (n = 126) wrote essays and rated the quality of their work. Teachers then provided their own ratings of the essays. Results revealed that students who were less accurate in their self-assessments produced essays that…
Descriptors: Essays, Scores, Natural Language Processing, Interrater Reliability
Allen, Laura K.; Snow, Erica L.; McNamara, Danielle S. – Grantee Submission, 2015
This study builds upon previous work aimed at developing a student model of reading comprehension ability within the intelligent tutoring system, iSTART. Currently, the system evaluates students' self-explanation performance using a local, sentence-level algorithm and does not adapt content based on reading ability. The current study leverages…
Descriptors: Reading Comprehension, Reading Skills, Natural Language Processing, Intelligent Tutoring Systems
Peer reviewed
Allen, Laura K.; Snow, Erica L.; McNamara, Danielle S. – Grantee Submission, 2014
In the current study, we utilize natural language processing techniques to examine relations between the linguistic properties of students' self-explanations and their reading comprehension skills. Linguistic features of students' aggregated self-explanations were analyzed using the Linguistic Inquiry and Word Count (LIWC) software. Results…
Descriptors: Natural Language Processing, Reading Comprehension, Linguistics, Predictor Variables
Peer reviewed
Varner, Laura K.; Jackson, G. Tanner; Snow, Erica L.; McNamara, Danielle S. – Grantee Submission, 2013
This study expands upon an existing model of students' reading comprehension ability within an intelligent tutoring system. The current system evaluates students' natural language input using a local student model. We examine the potential to expand this model by assessing the linguistic features of self-explanations aggregated across entire…
Descriptors: Reading Comprehension, Intelligent Tutoring Systems, Natural Language Processing, Reading Ability
Peer reviewed
Crossley, Scott A.; Varner, Laura K.; Roscoe, Rod D.; McNamara, Danielle S. – Grantee Submission, 2013
We present an evaluation of the Writing Pal (W-Pal) intelligent tutoring system (ITS) and the W-Pal automated writing evaluation (AWE) system through the use of computational indices related to text cohesion. Sixty-four students participated in this study. Each student was assigned to either the W-Pal ITS condition or the W-Pal AWE condition. The…
Descriptors: Intelligent Tutoring Systems, Automation, Writing Evaluation, Writing Assignments