
ERIC Number: ED655802
Record Type: Non-Journal
Publication Date: 2024-Feb-26
Pages: 14
Abstractor: As Provided
ISBN: N/A
ISSN: N/A
EISSN: N/A
Automated Assessment of Students' Code Comprehension Using LLMs
Grantee Submission, Paper presented at the Workshop on AI for Education - Bridging Innovation and Responsibility at AAAI (Vancouver, Canada, Feb 26-27, 2024)
Assessing students' answers, and in particular natural language answers, is a crucial challenge in the field of education. Advances in transformer-based models such as Large Language Models (LLMs) have led to significant progress in various natural language tasks. Nevertheless, amidst the growing trend of evaluating LLMs across diverse tasks, evaluating LLMs in the realm of automated answer assessment has not received much attention. To address this gap, we explore the potential of using LLMs for automated assessment of students' short and open-ended answers in program comprehension tasks. In particular, we use LLMs to compare students' explanations with expert explanations in the context of line-by-line explanations of computer programs. For comparison purposes, we assess both decoder-only LLMs and encoder-based Semantic Textual Similarity (STS) models in the context of assessing the correctness of students' explanations of computer code. Our findings indicate that decoder-only LLMs, when prompted in few-shot and chain-of-thought settings, perform comparably to fine-tuned encoder-based models in evaluating students' short answers in the programming domain. [This paper was published in: "Proceedings of Machine Learning Research" (2024).]
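Illustrative sketch (not part of the original record): the two assessment strategies contrasted in the abstract could be prototyped roughly as below. The encoder model name, similarity threshold, prompt wording, and worked example are assumptions for illustration, not the authors' actual configuration.

    # Sketch of the two strategies described in the abstract:
    # (1) an encoder-based Semantic Textual Similarity (STS) score, and
    # (2) a few-shot, chain-of-thought prompt for a decoder-only LLM.
    # Model name, threshold, and prompt wording are illustrative assumptions.
    from sentence_transformers import SentenceTransformer, util

    sts_model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed STS encoder

    def sts_correct(student: str, expert: str, threshold: float = 0.7) -> bool:
        """Mark a student's line explanation correct if its embedding is
        close enough to the expert explanation (threshold is a guess)."""
        emb = sts_model.encode([student, expert], convert_to_tensor=True)
        return util.cos_sim(emb[0], emb[1]).item() >= threshold

    def cot_prompt(code_line: str, student: str, expert: str) -> str:
        """Build a few-shot, chain-of-thought grading prompt for a
        decoder-only LLM; the single worked example is hypothetical."""
        return (
            "You grade student explanations of single lines of code.\n"
            "Reason step by step, then answer Correct or Incorrect.\n\n"
            "Line: i += 1\n"
            "Expert: increments the counter i by one.\n"
            "Student: adds 1 to i.\n"
            "Reasoning: both describe incrementing i. Answer: Correct.\n\n"
            f"Line: {code_line}\n"
            f"Expert: {expert}\n"
            f"Student: {student}\n"
            "Reasoning:"
        )

The prompt string would be sent to whichever decoder-only LLM is under evaluation, while the STS route instead thresholds the cosine similarity between the two explanations directly.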
Publication Type: Speeches/Meeting Papers; Reports - Research
Education Level: N/A
Audience: N/A
Language: English
Sponsor: Institute of Education Sciences (ED); National Science Foundation (NSF)
Authoring Institution: N/A
IES Funded: Yes
Grant or Contract Numbers: R305A220385; 1934745; 1822816