Source: Grantee Submission (78)
Publication Type: Speeches/Meeting Papers (78) | Reports - Research (65) | Reports - Descriptive (8) | Reports - Evaluative (5)
Showing 1 to 15 of 78 results
Peer reviewed
Andreea Dutulescu; Stefan Ruseti; Denis Iorga; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
The process of generating challenging and appropriate distractors for multiple-choice questions is a complex and time-consuming task. Existing methods for automated generation have limitations in proposing challenging distractors, or they fail to effectively filter out incorrect choices that closely resemble the correct answer, share synonymous…
Descriptors: Multiple Choice Tests, Artificial Intelligence, Attention, Natural Language Processing
Peer reviewed
Yang Zhong; Mohamed Elaraby; Diane Litman; Ahmed Ashraf Butt; Muhsin Menekse – Grantee Submission, 2024
This paper introduces REFLECTSUMM, a novel summarization dataset specifically designed for summarizing students' reflective writing. The goal of REFLECTSUMM is to facilitate the development and evaluation of novel summarization techniques tailored to real-world scenarios with little training data, with potential implications in the opinion summarization…
Descriptors: Documentation, Writing (Composition), Reflection, Metadata
Olney, Andrew M. – Grantee Submission, 2022
Multi-angle question answering models have recently been proposed that promise to perform related tasks like question generation. However, performance on related tasks has not been thoroughly studied. We investigate a leading model called Macaw on the task of multiple choice question generation and evaluate its performance on three angles that…
Descriptors: Test Construction, Multiple Choice Tests, Test Items, Models
Peer reviewed
Full text PDF available on ERIC
Jianing Zhou; Ziheng Zeng; Hongyu Gong; Suma Bhat – Grantee Submission, 2022
Idiomatic expressions (IEs) play an essential role in natural language. In this paper, we study the task of idiomatic sentence paraphrasing (ISP), which aims to paraphrase a sentence containing an IE by replacing the IE with its literal paraphrase. The lack of large-scale corpora with idiomatic-literal parallel sentences is a primary challenge for this…
Descriptors: Language Patterns, Sentences, Language Processing, Phrase Structure
Dragos Corlatescu; Micah Watanabe; Stefan Ruseti; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2023
Reading comprehension is essential for both knowledge acquisition and memory reinforcement. Automated modeling of the comprehension process provides insights into the efficacy of specific texts as learning tools. This paper introduces an improved version of the Automated Model of Comprehension, version 3.0 (AMoC v3.0). AMoC v3.0 is based on two…
Descriptors: Reading Comprehension, Models, Concept Mapping, Graphs
Peer reviewed
Sami Baral; Eamon Worden; Wen-Chiang Lim; Zhuang Luo; Christopher Santorelli; Ashish Gurung; Neil Heffernan – Grantee Submission, 2024
The effectiveness of feedback in enhancing learning outcomes is well documented within Educational Data Mining (EDM). Prior research has explored methodologies to enhance the effectiveness of feedback to students in various ways. Recent developments in Large Language Models (LLMs) have extended their utility in enhancing automated…
Descriptors: Automation, Scoring, Computer Assisted Testing, Natural Language Processing
Peer reviewed
Maria Goldshtein; Jaclyn Ocumpaugh; Andrew Potter; Rod D. Roscoe – Grantee Submission, 2024
As language technologies have become more sophisticated and prevalent, there have been increasing concerns about bias in natural language processing (NLP). Such work often focuses on the effects of bias rather than on its sources. In contrast, this paper discusses how normative language assumptions and ideologies influence a range of automated language…
Descriptors: Language Attitudes, Computational Linguistics, Computer Software, Natural Language Processing
Peer reviewed
Priti Oli; Rabin Banjade; Jeevan Chapagain; Vasile Rus – Grantee Submission, 2023
This paper systematically explores how Large Language Models (LLMs) generate explanations of code examples of the type used in intro-to-programming courses. As we show, the nature of code explanations generated by LLMs varies considerably based on the wording of the prompt, the target code examples being explained, the programming language, the…
Descriptors: Computational Linguistics, Programming, Computer Science Education, Programming Languages
Peer reviewed
Full text PDF available on ERIC
Andrew M. Olney – Grantee Submission, 2023
Multiple choice questions are traditionally expensive to produce. Recent advances in large language models (LLMs) have led to fine-tuned LLMs that generate questions competitive with human-authored questions. However, the relative capabilities of ChatGPT-family models have not yet been established for this task. We present a carefully controlled…
Descriptors: Test Construction, Multiple Choice Tests, Test Items, Algorithms
Peer reviewed
Ha Tien Nguyen; Conrad Borchers; Meng Xia; Vincent Aleven – Grantee Submission, 2024
Intelligent tutoring systems (ITS) can help students learn successfully, yet little work has explored the role of caregivers in shaping that success. Past interventions to help caregivers support their child's homework have been largely disconnected from educational technology. The paper presents prototyping design research with nine middle…
Descriptors: Middle School Mathematics, Intelligent Tutoring Systems, Caregivers, Caregiver Attitudes
Bogdan Nicula; Marilena Panaite; Tracy Arner; Renu Balyan; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2023
Self-explanation practice is an effective method to support students in better understanding complex texts. This study focuses on automatically assessing the comprehension strategies employed by readers while understanding STEM texts. Data from 3 datasets (N = 11,833) with self-explanations annotated on different comprehension strategies (i.e.,…
Descriptors: Reading Strategies, Reading Comprehension, Metacognition, STEM Education
Peer reviewed
Arun-Balajiee Lekshmi-Narayanan; Priti Oli; Jeevan Chapagain; Mohammad Hassany; Rabin Banjade; Vasile Rus – Grantee Submission, 2024
Worked examples, which present explained code for solving typical programming problems, are among the most popular types of learning content in programming classes. Most approaches and tools for presenting these examples to students are based on line-by-line explanations of the example code. However, instructors rarely have time to provide…
Descriptors: Coding, Computer Science Education, Computational Linguistics, Artificial Intelligence
Botarleanu, Robert-Mihai; Dascalu, Mihai; Watanabe, Micah; McNamara, Danielle S.; Crossley, Scott Andrew – Grantee Submission, 2021
The ability to objectively quantify the complexity of a text can be a useful indicator of how likely learners at a given level are to comprehend it. Before creating more complex models for assessing text difficulty, we note that the basic building blocks of a text are its words; inherently, a text's overall difficulty is greatly influenced by the complexity of…
Descriptors: Multilingualism, Language Acquisition, Age, Models
Botarleanu, Robert-Mihai; Dascalu, Mihai; Allen, Laura K.; Crossley, Scott Andrew; McNamara, Danielle S. – Grantee Submission, 2022
Automated scoring of student language is a complex task that requires systems to emulate complex and multi-faceted human evaluation criteria. Summary scoring brings an additional layer of complexity to automated scoring because it involves two texts of differing lengths that must be compared. In this study, we present our approach to automate…
Descriptors: Automation, Scoring, Documentation, Likert Scales
Peer reviewed
Full text PDF available on ERIC
Allen, Laura Kristen; Magliano, Joseph P.; McCarthy, Kathryn S.; Sonia, Allison N.; Creer, Sarah D.; McNamara, Danielle S. – Grantee Submission, 2021
The current study examined the extent to which the cohesion detected in readers' constructed responses to multiple documents was predictive of persuasive, source-based essay quality. Participants (N=95) completed multiple-document reading tasks wherein they were prompted to think aloud, self-explain, or evaluate the sources while reading a set of…
Descriptors: Reading Comprehension, Connected Discourse, Reader Response, Natural Language Processing