
ERIC Number: ED638958
Record Type: Non-Journal
Publication Date: 2023
Pages: 19
Abstractor: As Provided
ISBN: N/A
ISSN: N/A
EISSN: N/A
The Behavior of Large Language Models When Prompted to Generate Code Explanations
Priti Oli; Rabin Banjade; Jeevan Chapagain; Vasile Rus
Grantee Submission, Paper presented at the Conference on Neural Information Processing Systems (NeurIPS 2023) (37th, New Orleans, LA, Dec 2023)
This paper systematically explores how Large Language Models (LLMs) generate explanations of code examples of the type used in intro-to-programming courses. As we show, the nature of code explanations generated by LLMs varies considerably based on the wording of the prompt, the target code examples being explained, the programming language, the temperature parameter, and the version of the LLM. Nevertheless, they are consistent in two major respects for Java and Python: the readability level, which hovers around the 7th-8th grade level, and lexical density, i.e., the relative size of the meaningful words with respect to the total explanation size. Furthermore, the explanations score very high on correctness but lower on three other metrics: completeness, conciseness, and contextualization. [This paper is in: Proceedings of the Workshop on Generative AI for Education (GAIED): Advances, Opportunities, and Challenges, 2023.]
Publication Type: Speeches/Meeting Papers; Reports - Research
Education Level: Elementary Education; Grade 7; Junior High Schools; Middle Schools; Secondary Education; Grade 8
Audience: N/A
Language: English
Sponsor: National Science Foundation (NSF); Institute of Education Sciences (ED)
Authoring Institution: N/A
IES Funded: Yes
Grant or Contract Numbers: 1822816; R305A220385