Publication Date
  In 2025: 0
  Since 2024: 10
  Since 2021 (last 5 years): 64
  Since 2016 (last 10 years): 141
  Since 2006 (last 20 years): 161
Source
  Grantee Submission: 161
Publication Type
  Reports - Research: 131
  Speeches/Meeting Papers: 78
  Journal Articles: 39
  Reports - Descriptive: 19
  Reports - Evaluative: 11
  Tests/Questionnaires: 4
  Information Analyses: 2
  Opinion Papers: 1
Audience
  Researchers: 1
  Teachers: 1
Location
  Pennsylvania: 4
  California: 3
  Canada: 3
  United States: 3
  Arizona (Phoenix): 2
  Illinois: 2
  Pennsylvania (Pittsburgh): 2
  Africa: 1
  Arizona: 1
  California (Long Beach): 1
  Florida: 1
Andreea Dutulescu; Stefan Ruseti; Denis Iorga; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
The process of generating challenging and appropriate distractors for multiple-choice questions is a complex and time-consuming task. Existing methods for automated generation have limitations in proposing challenging distractors, or they fail to effectively filter out incorrect choices that closely resemble the correct answer, share synonymous…
Descriptors: Multiple Choice Tests, Artificial Intelligence, Attention, Natural Language Processing
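The filtering problem this abstract describes can be illustrated with a minimal sketch: reject any candidate distractor whose overlap with the correct answer is too high. The similarity measure, threshold, and function names below are illustrative assumptions, not the authors' method.

```python
# Minimal sketch of similarity-based distractor filtering (illustrative only).
# A candidate distractor is dropped when its token overlap with the correct
# answer exceeds a threshold.

def jaccard(a: str, b: str) -> float:
    """Token-level Jaccard similarity between two phrases."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

def filter_distractors(answer: str, candidates: list[str], max_sim: float = 0.5) -> list[str]:
    """Keep only candidates that are sufficiently dissimilar to the answer."""
    return [c for c in candidates if jaccard(answer, c) <= max_sim]

kept = filter_distractors(
    "natural selection",
    ["natural selection pressure", "genetic drift", "artificial selection"],
)
# "natural selection pressure" is rejected (overlap 2/3 > 0.5);
# the other two candidates survive.
```

A real system would use semantic similarity (e.g., sentence embeddings) rather than token overlap, since token overlap misses synonymous distractors entirely.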

Yang Zhong; Mohamed Elaraby; Diane Litman; Ahmed Ashraf Butt; Muhsin Menekse – Grantee Submission, 2024
This paper introduces REFLECTSUMM, a novel summarization dataset specifically designed for summarizing students' reflective writing. The goal of REFLECTSUMM is to facilitate developing and evaluating novel summarization techniques tailored to real-world scenarios with little training data, with potential implications in the opinion summarization…
Descriptors: Documentation, Writing (Composition), Reflection, Metadata
Laura K. Allen; Sarah C. Creer; Püren Öncel – Grantee Submission, 2022
As educators turn to technology to supplement classroom instruction, the integration of natural language processing (NLP) into educational technologies is vital for increasing student success. NLP involves the use of computers to analyze and respond to human language, including students' responses to a variety of assignments and tasks. While NLP…
Descriptors: Natural Language Processing, Learning Analytics, Learning Processes, Methods
Stefan Ruseti; Ionut Paraschiv; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
Automated Essay Scoring (AES) is a well-studied problem in Natural Language Processing applied in education. Solutions vary from handcrafted linguistic features to large Transformer-based models, implying a significant effort in feature extraction and model implementation. We introduce a novel Automated Machine Learning (AutoML) pipeline…
Descriptors: Computer Assisted Testing, Scoring, Automation, Essays
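The abstract notes that AES solutions range from handcrafted linguistic features to large Transformer-based models. A toy sketch of the handcrafted-feature end of that spectrum (the feature names here are illustrative assumptions, not the paper's pipeline):

```python
# Toy sketch of handcrafted linguistic features for essay scoring
# (illustrative only; the paper proposes an AutoML pipeline instead).

def essay_features(text: str) -> dict:
    """Extract a few shallow linguistic features from an essay."""
    words = text.split()
    n = max(len(words), 1)
    return {
        "n_words": len(words),                                   # essay length
        "avg_word_len": sum(len(w) for w in words) / n,          # lexical sophistication proxy
        "type_token_ratio": len({w.lower() for w in words}) / n, # vocabulary diversity
    }

feats = essay_features("The quick brown fox jumps over the lazy dog")
```

Features like these would normally feed a regression model trained against human scores; an AutoML pipeline automates exactly this feature-and-model selection effort.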
Olney, Andrew M. – Grantee Submission, 2022
Multi-angle question answering models have recently been proposed that promise to perform related tasks like question generation. However, performance on related tasks has not been thoroughly studied. We investigate a leading model called Macaw on the task of multiple choice question generation and evaluate its performance on three angles that…
Descriptors: Test Construction, Multiple Choice Tests, Test Items, Models
Jianing Zhou; Ziheng Zeng; Hongyu Gong; Suma Bhat – Grantee Submission, 2022
Idiomatic expressions (IEs) play an essential role in natural language. In this paper, we study the task of idiomatic sentence paraphrasing (ISP), which aims to paraphrase a sentence with an IE by replacing the IE with its literal paraphrase. The lack of large scale corpora with idiomatic-literal parallel sentences is a primary challenge for this…
Descriptors: Language Patterns, Sentences, Language Processing, Phrase Structure
Matthew T. McCrudden; Linh Huynh; Bailing Lyu; Jonna M. Kulikowich; Danielle S. McNamara – Grantee Submission, 2024
Readers build a mental representation of text during reading. The coherence-building processes readers use to construct this representation are key to comprehension. We examined the effects of self-explanation on coherence-building processes as undergraduates (n = 51) read five complementary texts about natural selection and…
Descriptors: Reading Processes, Reading Comprehension, Undergraduate Students, Evolution
Ying Fang; Rod D. Roscoe; Danielle S. McNamara – Grantee Submission, 2023
Artificial Intelligence (AI) based assessments are commonly used in a variety of settings including business, healthcare, policing, manufacturing, and education. In education, AI-based assessments undergird intelligent tutoring systems as well as many tools used to evaluate students and, in turn, guide learning and instruction. This chapter…
Descriptors: Artificial Intelligence, Computer Assisted Testing, Student Evaluation, Evaluation Methods
Dragos-Georgian Corlatescu; Micah Watanabe; Stefan Ruseti; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
Modeling reading comprehension processes is a critical task for Learning Analytics, as accurate models of the reading process can be used to match students to texts, identify appropriate interventions, and predict learning outcomes. This paper introduces an improved version of the Automated Model of Comprehension, namely version 4.0. AMoC has its…
Descriptors: Computer Software, Artificial Intelligence, Learning Analytics, Natural Language Processing
Laura K. Allen; Arthur C. Graesser; Danielle S. McNamara – Grantee Submission, 2023
Assessments of natural language can provide vast information about individuals' thoughts and cognitive processes, but they often rely on time-intensive human scoring, deterring researchers from collecting these sources of data. Natural language processing (NLP) gives researchers the opportunity to implement automated textual analyses across a…
Descriptors: Psychological Studies, Natural Language Processing, Automation, Research Methodology
Dragos Corlatescu; Micah Watanabe; Stefan Ruseti; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2023
Reading comprehension is essential for both knowledge acquisition and memory reinforcement. Automated modeling of the comprehension process provides insights into the efficacy of specific texts as learning tools. This paper introduces an improved version of the Automated Model of Comprehension, version 3.0 (AMoC v3.0). AMoC v3.0 is based on two…
Descriptors: Reading Comprehension, Models, Concept Mapping, Graphs
Muhsin Menekse – Grantee Submission, 2023
Generative artificial intelligence (AI) technologies, such as large language models (LLMs) and diffusion model image and video generators, can transform learning and teaching experiences by providing students and instructors with access to a vast amount of information and creating innovative learning and teaching materials in a very efficient way…
Descriptors: Educational Trends, Engineering Education, Artificial Intelligence, Technology Uses in Education
Sami Baral; Eamon Worden; Wen-Chiang Lim; Zhuang Luo; Christopher Santorelli; Ashish Gurung; Neil Heffernan – Grantee Submission, 2024
The effectiveness of feedback in enhancing learning outcomes is well documented within Educational Data Mining (EDM). Prior research has explored methodologies to enhance the effectiveness of feedback to students in various ways. Recent developments in Large Language Models (LLMs) have extended their utility in enhancing automated…
Descriptors: Automation, Scoring, Computer Assisted Testing, Natural Language Processing
Maria Goldshtein; Jaclyn Ocumpaugh; Andrew Potter; Rod D. Roscoe – Grantee Submission, 2024
As language technologies have become more sophisticated and prevalent, there have been increasing concerns about bias in natural language processing (NLP). Such work often focuses on the effects of bias rather than its sources. In contrast, this paper discusses how normative language assumptions and ideologies influence a range of automated language…
Descriptors: Language Attitudes, Computational Linguistics, Computer Software, Natural Language Processing

Priti Oli; Rabin Banjade; Jeevan Chapagain; Vasile Rus – Grantee Submission, 2023
This paper systematically explores how Large Language Models (LLMs) generate explanations of code examples of the type used in intro-to-programming courses. As we show, the nature of code explanations generated by LLMs varies considerably based on the wording of the prompt, the target code examples being explained, the programming language, the…
Descriptors: Computational Linguistics, Programming, Computer Science Education, Programming Languages
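The finding that LLM-generated code explanations vary with the wording of the prompt can be illustrated with hypothetical prompt templates (these templates and names are assumptions for illustration, not the prompts used in the paper):

```python
# Sketch of how prompt wording shapes LLM code-explanation requests
# (hypothetical templates; the paper's exact prompts are not reproduced here).

TEMPLATES = {
    "line_by_line": "Explain the following {lang} code line by line:\n{code}",
    "high_level": "Summarize what this {lang} program does in one sentence:\n{code}",
}

def build_prompt(style: str, lang: str, code: str) -> str:
    """Fill a prompt template for a given explanation style."""
    return TEMPLATES[style].format(lang=lang, code=code)

p = build_prompt("high_level", "Python", "print(sum(range(10)))")
```

Sending the same code under different templates (and in different programming languages) is one way to study how explanation style, depth, and accuracy shift with prompt design.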