Peer reviewed
ERIC Number: ED665490
Record Type: Non-Journal
Publication Date: 2024
Pages: 8
Abstractor: As Provided
ISBN: N/A
ISSN: N/A
EISSN: N/A
Available Date: N/A
Large Language Model Detuning in Learning Content Understanding
Tsubasa Minematsu; Atsushi Shimada
International Association for Development of the Information Society, Paper presented at the International Association for Development of the Information Society (IADIS) International Conference on Cognition and Exploratory Learning in the Digital Age (CELDA) (21st, Zagreb, Croatia, Oct 26-28, 2024)
Educational uses of large language models (LLMs), such as generating distractors for multiple-choice questions and supporting learning by teaching, rely on error-containing content. Prompt tuning and retraining are possible ways of having LLMs generate error-containing sentences about learning content, but how to tune LLMs for specific lecture content remains underexplored. Such discussion helps in controlling LLMs and in developing educational applications. In this study, considering the limitations of prompt-based approaches such as prompt injection, we aim to train a detuned LLM that states only incorrect things. Our method detunes an LLM by generating datasets designed to confuse it. To evaluate the method, we had the detuned LLM solve multiple-choice questions and checked whether it answered them incorrectly. We also counted the errors in sentences generated by the LLM to investigate how its knowledge of the lecture content is degraded in terms of factuality. [For the full proceedings, see ED665357.]
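As a rough illustration of the kind of multiple-choice evaluation the abstract describes, the sketch below scores each answer option by its log-likelihood under a causal LLM and counts how often the model picks a wrong option. It assumes a Hugging Face Transformers model; the checkpoint name ("gpt2"), the toy question, and the scoring scheme are illustrative assumptions, not the authors' implementation.

# Hypothetical sketch: check whether a "detuned" LLM answers
# multiple-choice questions incorrectly by scoring each option's
# log-likelihood given the question. Placeholders throughout.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # placeholder for a detuned model checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def option_logprob(question: str, option: str) -> float:
    """Sum of token log-probabilities of the option, conditioned on the question."""
    prompt_ids = tokenizer(question, return_tensors="pt").input_ids
    full_ids = tokenizer(question + " " + option, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    # Log-probability of each next token in the full sequence.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = full_ids[0, 1:]
    token_scores = log_probs[torch.arange(targets.numel()), targets]
    # Keep only the scores for the option tokens (those after the prompt).
    option_start = prompt_ids.shape[1]
    return token_scores[option_start - 1:].sum().item()

def predict(question: str, options: list[str]) -> int:
    """Index of the option the model considers most likely."""
    scores = [option_logprob(question, opt) for opt in options]
    return max(range(len(options)), key=lambda i: scores[i])

# Toy item; a real evaluation would use lecture-content questions.
questions = [
    {
        "question": "Which data structure removes elements in FIFO order?",
        "options": ["Stack", "Queue", "Binary tree", "Hash table"],
        "answer": 1,
    },
]

wrong = sum(predict(q["question"], q["options"]) != q["answer"] for q in questions)
print(f"Incorrect answers: {wrong}/{len(questions)}")

In practice, the detuned checkpoint and a lecture-specific question set would replace these placeholders, and the incorrect-answer rate would indicate how strongly the detuning degraded the model's knowledge of the lecture content.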
International Association for the Development of the Information Society. e-mail: secretariat@iadis.org; Web site: http://www.iadisportal.org
Publication Type: Speeches/Meeting Papers; Reports - Research
Education Level: N/A
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: N/A
Grant or Contract Numbers: N/A
Author Affiliations: N/A