Publication Date
  In 2025: 0
  Since 2024: 4
  Since 2021 (last 5 years): 4
  Since 2016 (last 10 years): 6
  Since 2006 (last 20 years): 6
Descriptor
  Artificial Intelligence: 6
  Automation: 4
  Natural Language Processing: 4
  Prediction: 3
  Reading Comprehension: 3
  Accuracy: 2
  Computer Assisted Testing: 2
  Documentation: 2
  Models: 2
  Adult Literacy: 1
  Algorithms: 1
Source
  Grantee Submission: 6
Author
  Danielle S. McNamara: 6
  Mihai Dascalu: 6
  Stefan Ruseti: 6
  Amy M. Johnson: 2
  Andreea Dutulescu: 2
  Renu Balyan: 2
  Denis Iorga: 1
  Dragos-Georgian Corlatescu: 1
  Ionut Paraschiv: 1
  Kathryn S. McCarthy: 1
  Kristopher J. Kopp: 1
Publication Type
  Reports - Research: 6
  Speeches/Meeting Papers: 4
  Journal Articles: 1
Location
  California: 1
Andreea Dutulescu; Stefan Ruseti; Denis Iorga; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
The process of generating challenging and appropriate distractors for multiple-choice questions is a complex and time-consuming task. Existing methods for automated generation are limited in their ability to propose challenging distractors, or they fail to effectively filter out incorrect choices that closely resemble the correct answer, share synonymous…
Descriptors: Multiple Choice Tests, Artificial Intelligence, Attention, Natural Language Processing
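
The truncated abstract above stops before the authors' approach. Purely as an illustration of one common building block in distractor pipelines, the sketch below filters out candidate distractors that are semantically too close to the correct answer using sentence-embedding similarity; the encoder name and the 0.8 threshold are assumptions for this example, not details from the paper.

```python
# Illustrative sketch only -- not the method from Dutulescu et al. (2024).
# Assumes the sentence-transformers package; encoder choice and threshold are arbitrary.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # hypothetical encoder choice

def filter_distractors(correct_answer, candidates, max_similarity=0.8):
    """Drop candidate distractors that are semantically too close to the correct answer."""
    answer_emb = model.encode(correct_answer, convert_to_tensor=True)
    cand_embs = model.encode(candidates, convert_to_tensor=True)
    sims = util.cos_sim(answer_emb, cand_embs)[0]
    return [c for c, s in zip(candidates, sims) if float(s) < max_similarity]

# Example: a near-paraphrase of the correct answer is rejected as a distractor.
print(filter_distractors(
    "Plants convert light energy into chemical energy.",
    ["Plants turn sunlight into stored chemical energy.",   # too close to the answer
     "Plants absorb nitrogen directly from sunlight.",
     "Plants release oxygen only at night."]))
```

Candidates that survive such a filter would still need the plausibility and difficulty checks the paper is concerned with.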

Andreea Dutulescu; Stefan Ruseti; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
Assessing the difficulty of reading comprehension questions is crucial to educational methodologies and language understanding technologies. Traditional methods of assessing question difficulty frequently rely on human judgments or shallow metrics, often failing to accurately capture the intricate cognitive demands of answering a question. This…
Descriptors: Difficulty Level, Reading Tests, Test Items, Reading Comprehension

Stefan Ruseti; Ionut Paraschiv; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
Automated Essay Scoring (AES) is a well-studied problem in Natural Language Processing applied in education. Solutions range from handcrafted linguistic features to large Transformer-based models, implying significant effort in feature extraction and model implementation. We introduce a novel Automated Machine Learning (AutoML) pipeline…
Descriptors: Computer Assisted Testing, Scoring, Automation, Essays
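
The entry above contrasts handcrafted-feature and Transformer-based AES solutions but is truncated before the authors' AutoML pipeline. For orientation only, here is a minimal baseline of the handcrafted/statistical flavor: TF-IDF features with a ridge regressor on toy data. All data and hyperparameters are invented for this example and are not from the paper.

```python
# Minimal AES baseline sketch -- not the AutoML pipeline from Ruseti et al. (2024).
# Toy data; in practice a corpus of human-scored essays would be used.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

essays = [
    "The author argues clearly and supports each claim with evidence.",
    "essay is short no detail",
    "A well organized response that addresses the prompt with examples.",
    "bad grammar and no structure here",
]
scores = [5.0, 1.0, 4.0, 2.0]  # hypothetical holistic scores

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), Ridge(alpha=1.0))
model.fit(essays, scores)

print(model.predict(["A clear, well supported argument with concrete examples."]))
```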

Dragos-Georgian Corlatescu; Micah Watanabe; Stefan Ruseti; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
Modeling reading comprehension processes is a critical task for Learning Analytics, as accurate models of the reading process can be used to match students to texts, identify appropriate interventions, and predict learning outcomes. This paper introduces an improved version of the Automated Model of Comprehension, namely version 4.0. AMoC has its…
Descriptors: Computer Software, Artificial Intelligence, Learning Analytics, Natural Language Processing

Stefan Ruseti; Mihai Dascalu; Amy M. Johnson; Danielle S. McNamara; Renu Balyan; Kathryn S. McCarthy; Stefan Trausan-Matu – Grantee Submission, 2018
Summarization enhances comprehension and is considered an effective strategy to promote and enhance learning and deep understanding of texts. However, summarization is seldom implemented by teachers in classrooms because manual evaluation requires considerable effort and time. Although the need for automated support is pressing, there are only a…
Descriptors: Documentation, Artificial Intelligence, Educational Technology, Writing (Composition)
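
The abstract is cut off before describing the system; as a toy illustration of what automated summary evaluation involves, the sketch below scores how much of a source text's vocabulary a student summary covers. This clipped unigram-overlap measure is a deliberately simple stand-in, not the evaluation model described in the paper.

```python
# Toy illustration of automated summary scoring -- not the system from Ruseti et al. (2018).
# Measures the fraction of the source's word occurrences that also appear in the summary.
import re
from collections import Counter

def source_coverage(source: str, summary: str) -> float:
    tokenize = lambda text: re.findall(r"[a-z']+", text.lower())
    source_counts = Counter(tokenize(source))
    summary_counts = Counter(tokenize(summary))
    overlap = sum(min(count, summary_counts[word]) for word, count in source_counts.items())
    return overlap / max(1, sum(source_counts.values()))

source = ("Summarization requires selecting the main ideas of a text "
          "and restating them concisely in one's own words.")
print(source_coverage(source, "A summary restates the main ideas of a text concisely."))
```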

Stefan Ruseti; Mihai Dascalu; Amy M. Johnson; Renu Balyan; Kristopher J. Kopp; Danielle S. McNamara – Grantee Submission, 2018
This study assesses the extent to which machine learning techniques can be used to predict question quality. An algorithm based on textual complexity indices was previously developed to assess question quality and to provide feedback on questions generated by students within iSTART (an intelligent tutoring system that teaches reading strategies). In…
Descriptors: Questioning Techniques, Artificial Intelligence, Networks, Classification
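
The entry above describes predicting question quality from textual complexity indices within iSTART. The sketch below only illustrates that general pattern: a few hand-rolled indices fed to an off-the-shelf classifier on toy questions. The features, labels, and model choice are assumptions for this example, not those used in the study.

```python
# Toy sketch of predicting question quality from simple textual indices.
# Not the algorithm from Ruseti et al. (2018); features, labels, and model are illustrative.
from sklearn.ensemble import RandomForestClassifier

def complexity_indices(question: str) -> list[float]:
    words = question.split()
    n_words = len(words)
    avg_word_len = sum(len(w) for w in words) / max(1, n_words)
    type_token_ratio = len({w.lower() for w in words}) / max(1, n_words)
    return [n_words, avg_word_len, type_token_ratio]

questions = [
    "Why does the author contrast the two experiments in the final section?",
    "What is it?",
    "How might the results change if the sample were drawn from a different population?",
    "Who?",
]
labels = [1, 0, 1, 0]  # hypothetical: 1 = deep question, 0 = shallow question

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit([complexity_indices(q) for q in questions], labels)
print(clf.predict([complexity_indices("Why would the method fail for longer texts?")]))
```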