
ERIC Number: ED650847
Record Type: Non-Journal
Publication Date: 2024
Pages: 8
Abstractor: As Provided
ISBN: N/A
ISSN: N/A
EISSN: N/A
How Hard Can This Question Be? An Exploratory Analysis of Features Assessing Question Difficulty Using LLMs
Andreea Dutulescu; Stefan Ruseti; Mihai Dascalu; Danielle S. McNamara
Grantee Submission, Paper presented at the International Conference on Educational Data Mining (EDM 2024), Atlanta, GA, Jul 14-17, 2024
Assessing the difficulty of reading comprehension questions is crucial to educational methodologies and language understanding technologies. Traditional methods of assessing question difficulty frequently rely on human judgments or shallow proxy metrics and often fail to capture the intricate cognitive demands of answering a question. This study tackles the task of automated question difficulty assessment, exploring the potential of leveraging Large Language Models (LLMs) to enhance comprehension of the context and interconnections required to answer a question. Our method incorporates multiple LLM-based difficulty measures and compares their performance against human-annotated difficulty labels on the FairytaleQA educational dataset. Beyond comparing different computational methods, this study bridges the gap between machine and human understanding of question difficulty by analyzing the correlation between LLM-based measures and human perceptions. Our results provide valuable insights into the capabilities of LLMs in educational settings, particularly in the context of reading comprehension.
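[Editor's illustration, not part of the ERIC record or the authors' released code: the correlation analysis the abstract describes could be sketched in Python as below. The items, the 1-5 label scale, and the score_difficulty() helper are hypothetical stand-ins; the paper's actual difficulty measures are not specified in this record.]

    # Minimal sketch, assuming hypothetical data and a stand-in scoring
    # function; not the authors' implementation.
    from scipy.stats import spearmanr

    # Toy stand-ins for FairytaleQA-style questions with human difficulty
    # labels (hypothetical 1 = easy .. 5 = hard scale).
    items = [
        ("Who is the main character?", 1),
        ("Why did the king hide the golden apple?", 3),
        ("How might the story have ended if the wolf had not been tricked?", 5),
    ]

    def score_difficulty(question: str) -> float:
        # Stand-in for an LLM-based measure, e.g. prompting a model to rate
        # difficulty on a numeric scale and parsing the reply. A trivial
        # length heuristic keeps this sketch self-contained and runnable.
        return float(len(question.split()))

    llm_scores = [score_difficulty(q) for q, _ in items]
    human_labels = [label for _, label in items]

    # Rank correlation between the LLM-based measure and human annotations.
    rho, p_value = spearmanr(llm_scores, human_labels)
    print(f"Spearman rho = {rho:.2f} (p = {p_value:.3g})")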
Publication Type: Speeches/Meeting Papers; Reports - Research
Education Level: N/A
Audience: N/A
Language: English
Sponsor: National Center for Education Research (NCER) (ED/IES)
Authoring Institution: N/A
IES Funded: Yes
Grant or Contract Numbers: R305T240035