ERIC Number: ED655931
Record Type: Non-Journal
Publication Date: 2023
Pages: 10
Abstractor: As Provided
ISBN: N/A
ISSN: N/A
EISSN: N/A
Available Date: N/A
Rewriting Math Word Problems with Large Language Models
Kole Norberg; Husni Almoubayyed; Stephen E. Fancsali; Logan De Ley; Kyle Weldon; April Murphy; Steve Ritter
Grantee Submission, Paper presented at the AIEd23: Artificial Intelligence in Education, Empowering Education with LLMs Workshop (Tokyo, Japan, Jul 07, 2023)
Large Language Models have recently achieved high performance on many writing tasks. In a recent study, math word problems in Carnegie Learning's MATHia adaptive learning software were rewritten by human authors to improve their clarity and specificity. The randomized experiment found that emerging readers who received the rewritten word problems spent less time completing the problems and also achieved higher mastery compared to emerging readers who received the original content. We used GPT-4 to rewrite the same set of math word problems, prompting it to follow the same guidelines that the human authors followed. We lay out our prompt engineering process, comparing several prompting strategies: zero-shot, few-shot, and chain-of-thought prompting. Additionally, we describe how we leveraged GPT's ability to write Python code to encode mathematical components of word problems. We report a text analysis of the original, human-rewritten, and GPT-rewritten problems. GPT rewrites had the best readability, lexical diversity, and cohesion scores but used more low-frequency words. We present our plan to test the GPT outputs in upcoming randomized field trials in MATHia.
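The prompting-strategy comparison described in the abstract might be set up roughly as in the sketch below. This is an illustration only, not the authors' method: the guideline text, the few-shot example pair, the sample problem, the temperature setting, and the "gpt-4" model identifier are assumptions, and the sketch simply uses the OpenAI Python client's chat completions endpoint.

```python
# Illustrative sketch: the paper's actual prompts and parameters are not given
# in this abstract. Requires the OpenAI Python client (pip install openai) and
# an OPENAI_API_KEY in the environment. All prompt text below is hypothetical.
from openai import OpenAI

client = OpenAI()

GUIDELINES = (
    "Rewrite the math word problem to improve clarity and specificity for "
    "emerging readers. Keep the underlying mathematics unchanged."
)

ORIGINAL_PROBLEM = "A train travels 120 miles in 3 hours. What is its speed?"

# Three strategies compared in the paper: zero-shot, few-shot, and
# chain-of-thought. The few-shot example pair here is invented for illustration.
PROMPTS = {
    "zero_shot": f"{GUIDELINES}\n\nProblem: {ORIGINAL_PROBLEM}",
    "few_shot": (
        f"{GUIDELINES}\n\n"
        "Example original: Sam had 5 apples and ate 2. How many are left?\n"
        "Example rewrite: Sam starts with 5 apples. Sam eats 2 of the apples. "
        "How many apples does Sam have now?\n\n"
        f"Problem: {ORIGINAL_PROBLEM}"
    ),
    "chain_of_thought": (
        f"{GUIDELINES}\n\n"
        "First, list the quantities and the question being asked. Then note "
        "which wording could confuse an emerging reader. Finally, give the "
        f"rewritten problem.\n\nProblem: {ORIGINAL_PROBLEM}"
    ),
}

def rewrite(prompt: str) -> str:
    """Send one rewriting prompt to the model and return its reply text."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.2,
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    for strategy, prompt in PROMPTS.items():
        print(f"--- {strategy} ---")
        print(rewrite(prompt))
```

In practice, the rewritten outputs from each strategy would then be scored (e.g., for readability, lexical diversity, and cohesion, as the abstract reports) to choose a prompting approach for field trials.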
Publication Type: Speeches/Meeting Papers; Reports - Evaluative
Education Level: N/A
Audience: N/A
Language: English
Sponsor: Institute of Education Sciences (ED)
Authoring Institution: N/A
IES Funded: Yes
Grant or Contract Numbers: R324A210289
Author Affiliations: N/A