Andreea Dutulescu; Stefan Ruseti; Denis Iorga; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
The process of generating challenging and appropriate distractors for multiple-choice questions is complex and time-consuming. Existing methods for automated generation have limitations in proposing challenging distractors, or they fail to effectively filter out incorrect choices that closely resemble the correct answer, share synonymous…
Descriptors: Multiple Choice Tests, Artificial Intelligence, Attention, Natural Language Processing

Andreea Dutulescu; Stefan Ruseti; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
Assessing the difficulty of reading comprehension questions is crucial to educational methodologies and language understanding technologies. Traditional methods of assessing question difficulty frequently rely on human judgments or shallow metrics, often failing to accurately capture the intricate cognitive demands of answering a question. This…
Descriptors: Difficulty Level, Reading Tests, Test Items, Reading Comprehension