Andreea Dutulescu; Stefan Ruseti; Denis Iorga; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
The process of generating challenging and appropriate distractors for multiple-choice questions is a complex and time-consuming task. Existing automated generation methods have limitations in proposing challenging distractors, or they fail to effectively filter out incorrect choices that closely resemble the correct answer, share synonymous…
Descriptors: Multiple Choice Tests, Artificial Intelligence, Attention, Natural Language Processing

Andreea Dutulescu; Stefan Ruseti; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
Assessing the difficulty of reading comprehension questions is crucial to educational methodologies and language understanding technologies. Traditional methods of assessing question difficulty frequently rely on human judgments or shallow metrics, often failing to accurately capture the intricate cognitive demands of answering a question. This…
Descriptors: Difficulty Level, Reading Tests, Test Items, Reading Comprehension

Robert-Mihai Botarleanu; Mihai Dascalu; Scott Andrew Crossley; Danielle S. McNamara – Grantee Submission, 2022
The ability to express yourself concisely and coherently is a crucial skill, both for academic purposes and professional careers. An important aspect to consider in writing is an adequate segmentation of ideas, which in turn requires a proper understanding of where to place paragraph breaks. However, these decisions are often performed…
Descriptors: Paragraph Composition, Text Structure, Automation, Identification

Bogdan Nicula; Marilena Panaite; Tracy Arner; Renu Balyan; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2023
Self-explanation practice is an effective method to support students in better understanding complex texts. This study focuses on automatically assessing the comprehension strategies employed by readers while understanding STEM texts. Data from 3 datasets (N = 11,833) with self-explanations annotated on different comprehension strategies (i.e.,…
Descriptors: Reading Strategies, Reading Comprehension, Metacognition, STEM Education

Stefan Ruseti; Mihai Dascalu; Amy M. Johnson; Danielle S. McNamara; Renu Balyan; Kathryn S. McCarthy; Stefan Trausan-Matu – Grantee Submission, 2018
Summarization enhances comprehension and is considered an effective strategy for promoting learning and deep understanding of texts. However, summarization is seldom implemented by teachers in classrooms because manual evaluation requires considerable time and effort. Although the need for automated support is pressing, there are only a…
Descriptors: Documentation, Artificial Intelligence, Educational Technology, Writing (Composition)

Marilena Panaite; Mihai Dascalu; Amy Johnson; Renu Balyan; Jianmin Dai; Danielle S. McNamara; Stefan Trausan-Matu – Grantee Submission, 2018
Intelligent Tutoring Systems (ITSs) are aimed at promoting acquisition of knowledge and skills by providing relevant and appropriate feedback during students' practice activities. ITSs for literacy instruction commonly assess typed responses using Natural Language Processing (NLP) algorithms. One step in this direction often requires building a…
Descriptors: Intelligent Tutoring Systems, Artificial Intelligence, Algorithms, Decision Making

Stefan Ruseti; Mihai Dascalu; Amy M. Johnson; Renu Balyan; Kristopher J. Kopp; Danielle S. McNamara – Grantee Submission, 2018
This study assesses the extent to which machine learning techniques can be used to predict question quality. An algorithm based on textual complexity indices was previously developed to assess question quality to provide feedback on questions generated by students within iSTART (an intelligent tutoring system that teaches reading strategies). In…
Descriptors: Questioning Techniques, Artificial Intelligence, Networks, Classification