Publication Date
- In 2025: 0
- Since 2024: 2
- Since 2021 (last 5 years): 21
- Since 2016 (last 10 years): 26
- Since 2006 (last 20 years): 30
Source
- Grantee Submission: 17
- International Educational…: 10
- International Association for…: 3
- Proceedings of the ASIS…: 2
- Applied Linguistics: 1
- Language Learning: 1
- Language and Speech: 1
Publication Type
- Speeches/Meeting Papers: 80
- Reports - Research: 39
- Information Analyses: 14
- Opinion Papers: 13
- Reports - Evaluative: 9
- Reports - Descriptive: 7
- Journal Articles: 5
- Guides - Classroom - Teacher: 2
- Guides - Non-Classroom: 1
Education Level
- Higher Education: 7
- Postsecondary Education: 7
- Secondary Education: 3
- High Schools: 2
- Elementary Education: 1
- Junior High Schools: 1
- Middle Schools: 1
Audience
- Practitioners: 4
- Teachers: 2
- Administrators: 1
- Researchers: 1
Location
- Brazil: 1
- California (Stanford): 1
- Illinois (Chicago): 1
- Italy: 1
- North Carolina: 1
- Pennsylvania: 1
- South Korea: 1
Assessments and Surveys
- Flesch Reading Ease Formula: 1
- Gates MacGinitie Reading Tests: 1
- National Assessment of…: 1
- Peabody Picture Vocabulary…: 1
- Test of English for…: 1
Andreea Dutulescu; Stefan Ruseti; Denis Iorga; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
The process of generating challenging and appropriate distractors for multiple-choice questions is a complex and time-consuming task. Existing methods for automated generation have limitations in proposing challenging distractors, or they fail to effectively filter out incorrect choices that closely resemble the correct answer, share synonymous…
Descriptors: Multiple Choice Tests, Artificial Intelligence, Attention, Natural Language Processing
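The filtering step this abstract alludes to — rejecting distractor candidates that are too close to the correct answer — can be illustrated with a minimal sketch. A token-overlap similarity stands in here for the semantic filters such systems actually use; the function names and threshold are hypothetical:

```python
def jaccard(a: str, b: str) -> float:
    """Token-set Jaccard similarity between two strings."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def filter_distractors(answer: str, candidates: list[str],
                       max_sim: float = 0.5) -> list[str]:
    """Keep only candidate distractors that are not too similar to the
    correct answer -- a crude stand-in for synonym/paraphrase filtering."""
    return [c for c in candidates if jaccard(answer, c) <= max_sim]
```

A production system would replace `jaccard` with an embedding-based similarity, but the reject-above-threshold shape is the same.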
Olney, Andrew M. – Grantee Submission, 2022
Multi-angle question answering models have recently been proposed that promise to perform related tasks like question generation. However, performance on related tasks has not been thoroughly studied. We investigate a leading model called Macaw on the task of multiple choice question generation and evaluate its performance on three angles that…
Descriptors: Test Construction, Multiple Choice Tests, Test Items, Models
Shimmei, Machi; Matsuda, Noboru – International Educational Data Mining Society, 2023
We propose an innovative, effective, and data-agnostic method to train a deep-neural network model with an extremely small training dataset, called VELR (Voting-based Ensemble Learning with Rejection). In educational research and practice, providing valid labels for a sufficient amount of data to be used for supervised learning can be very costly…
Descriptors: Artificial Intelligence, Training, Natural Language Processing, Educational Research
Rashid, M. Parvez; Xiao, Yunkai; Gehringer, Edward F. – International Educational Data Mining Society, 2022
Peer assessment can be a more effective pedagogical method when reviewers provide quality feedback. But what makes feedback helpful to reviewees? Other studies have identified quality feedback as focusing on detecting problems, providing suggestions, or pointing out where changes need to be made. However, it is important to seek students'…
Descriptors: Peer Evaluation, Feedback (Response), Natural Language Processing, Artificial Intelligence
Dragos Corlatescu; Micah Watanabe; Stefan Ruseti; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2023
Reading comprehension is essential for both knowledge acquisition and memory reinforcement. Automated modeling of the comprehension process provides insights into the efficacy of specific texts as learning tools. This paper introduces an improved version of the Automated Model of Comprehension, version 3.0 (AMoC v3.0). AMoC v3.0 is based on two…
Descriptors: Reading Comprehension, Models, Concept Mapping, Graphs
Sami Baral; Eamon Worden; Wen-Chiang Lim; Zhuang Luo; Christopher Santorelli; Ashish Gurung; Neil Heffernan – Grantee Submission, 2024
The effectiveness of feedback in enhancing learning outcomes is well documented within Educational Data Mining (EDM). Prior research has explored methodologies to enhance the effectiveness of feedback to students in various ways. Recent developments in Large Language Models (LLMs) have extended their utility in enhancing automated…
Descriptors: Automation, Scoring, Computer Assisted Testing, Natural Language Processing
Condor, Aubrey; Litster, Max; Pardos, Zachary – International Educational Data Mining Society, 2021
We explore how different components of an Automatic Short Answer Grading (ASAG) model affect the model's ability to generalize to questions outside of those used for training. For supervised automatic grading models, human ratings are primarily used as ground truth labels. Producing such ratings can be resource heavy, as subject matter experts…
Descriptors: Automation, Grading, Test Items, Generalization
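The supervised setup this abstract describes — human ratings serving as ground-truth labels for an automatic grader — can be illustrated with a toy nearest-neighbour grader. This is a sketch under simplifying assumptions, not the model the study evaluates:

```python
def _sim(a: str, b: str) -> float:
    """Token-overlap (Jaccard) similarity between two answers."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def grade(answer: str, labeled: list[tuple[str, int]]) -> int:
    """Return the human score attached to the most similar labeled answer."""
    return max(labeled, key=lambda pair: _sim(answer, pair[0]))[1]
```

The generalization question the paper raises is visible even here: a grader keyed to one question's labeled answers transfers poorly to unseen questions.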
Andrew M. Olney – Grantee Submission, 2023
Multiple choice questions are traditionally expensive to produce. Recent advances in large language models (LLMs) have led to fine-tuned LLMs that generate questions competitive with human-authored questions. However, the relative capabilities of ChatGPT-family models have not yet been established for this task. We present a carefully-controlled…
Descriptors: Test Construction, Multiple Choice Tests, Test Items, Algorithms
Botarleanu, Robert-Mihai; Dascalu, Mihai; Watanabe, Micah; McNamara, Danielle S.; Crossley, Scott Andrew – Grantee Submission, 2021
The ability to objectively quantify the complexity of a text can be a useful indicator of how likely learners of a given level are to comprehend it. Before creating more complex models for assessing text difficulty, we note that the basic building blocks of a text are its words and that, inherently, its overall difficulty is greatly influenced by the complexity of…
Descriptors: Multilingualism, Language Acquisition, Age, Models
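The abstract's premise — that text-level difficulty is driven largely by word-level complexity — suggests a simple aggregate. The sketch below assumes a hypothetical per-word difficulty table (e.g. age-of-acquisition norms) and a fallback value for unseen words:

```python
def mean_word_difficulty(text: str,
                         difficulty: dict[str, float],
                         default: float = 1.0) -> float:
    """Average per-word difficulty as a crude text-level estimate;
    unseen words fall back to a default (an assumption of this sketch)."""
    words = text.lower().split()
    return sum(difficulty.get(w, default) for w in words) / len(words)
```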
Botarleanu, Robert-Mihai; Dascalu, Mihai; Allen, Laura K.; Crossley, Scott Andrew; McNamara, Danielle S. – Grantee Submission, 2022
Automated scoring of student language is a complex task that requires systems to emulate complex and multi-faceted human evaluation criteria. Summary scoring brings an additional layer of complexity to automated scoring because it involves two texts of differing lengths that must be compared. In this study, we present our approach to automate…
Descriptors: Automation, Scoring, Documentation, Likert Scales
Botarleanu, Robert-Mihai; Dascalu, Mihai; Allen, Laura K.; Crossley, Scott Andrew; McNamara, Danielle S. – Grantee Submission, 2021
Text summarization is an effective reading comprehension strategy. However, summary evaluation is complex and must account for various factors including the summary and the reference text. This study examines a corpus of approximately 3,000 summaries based on 87 reference texts, with each summary being manually scored on a 4-point Likert scale.…
Descriptors: Computer Assisted Testing, Scoring, Natural Language Processing, Computer Software
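Comparing two texts of differing lengths, as summary scoring requires, often starts from a coverage-style feature. A minimal sketch, with an assumed stopword list — this is one plausible input feature, not the authors' feature set:

```python
STOPWORDS = frozenset({"the", "a", "an", "of", "and", "to"})

def content_coverage(summary: str, reference: str) -> float:
    """Fraction of the reference's content words that appear in the
    summary -- a simple feature a summary-scoring model might use."""
    ref = {w for w in reference.lower().split() if w not in STOPWORDS}
    summ = set(summary.lower().split())
    return len(ref & summ) / len(ref) if ref else 0.0
```

A learned model would combine several such features to predict the 4-point Likert score.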
Jia, Qinjin; Cui, Jialin; Xiao, Yunkai; Liu, Chengyuan; Rashid, Parvez; Gehringer, Edward – International Educational Data Mining Society, 2021
Peer assessment has been widely applied across diverse academic fields over the last few decades, and has demonstrated its effectiveness. However, the advantages of peer assessment can only be achieved with high-quality peer reviews. Previous studies have found that high-quality review comments usually comprise several features (e.g., contain…
Descriptors: Peer Evaluation, Models, Artificial Intelligence, Evaluation Methods
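One of the review-quality features such studies mention — whether a comment offers a suggestion — can be approximated with a cue-word rule. This is a baseline sketch; the cue list is an assumption, not taken from the paper:

```python
SUGGESTION_CUES = {"should", "could", "consider", "try", "suggest"}

def has_suggestion(comment: str) -> bool:
    """True when the review comment contains a suggestion cue word."""
    return bool(SUGGESTION_CUES & set(comment.lower().split()))
```

In practice such hand-written rules serve as baselines against which learned classifiers are compared.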
Botarleanu, Robert-Mihai; Dascalu, Mihai; Crossley, Scott Andrew; McNamara, Danielle S. – Grantee Submission, 2020
A key writing skill is the capability to clearly convey desired meaning using available linguistic knowledge. Consequently, writers must select from a large array of idioms, vocabulary terms that are semantically equivalent, and discourse features that simultaneously reflect content and allow readers to grasp meaning. In many cases, a simplified…
Descriptors: Natural Language Processing, Writing Skills, Difficulty Level, Reading Comprehension
Švábenský, Valdemar; Baker, Ryan S.; Zambrano, Andrés; Zou, Yishan; Slater, Stefan – International Educational Data Mining Society, 2023
Students who take an online course, such as a MOOC, use the course's discussion forum to ask questions or reach out to instructors when encountering an issue. However, reading and responding to students' questions is difficult to scale because of the time needed to consider each message. As a result, critical issues may be left unresolved, and…
Descriptors: Generalization, Computer Mediated Communication, MOOCs, State Universities
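A rule-based baseline for the triage task described here — flagging forum posts that need an instructor reply — might look like the sketch below. The cue list is purely illustrative:

```python
URGENT_CUES = {"deadline", "error", "broken", "cannot", "grade"}

def needs_instructor_reply(post: str) -> bool:
    """Flag posts containing urgency cues; a learned classifier trained
    on labeled forum data would replace this rule in practice."""
    return bool(URGENT_CUES & set(post.lower().split()))
```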
Silvia García-Méndez; Francisco de Arriba-Pérez; Francisco J. González-Castaño – International Association for Development of the Information Society, 2023
Mobile learning, or mLearning, has become an essential tool in many fields in this digital era; among them, educational training deserves special attention, applied to both basic and higher education in pursuit of active, flexible, effective, high-quality, and continuous learning. However, despite the advances in Natural Language Processing…
Descriptors: Higher Education, Artificial Intelligence, Computer Software, Usability