Abstract:
Open-ended comprehension questions are a common type of assessment used to evaluate how well students understand one or multiple documents. Our aim is to use natural language processing (NLP) to infer the level and type of inferencing within readers' answers to comprehension questions, using the linguistic and semantic features of their responses. Our taxonomy considers three types of responses to comprehension questions from students (N=146) who read four documents: a) textbase responses (i.e., the information required for the answer is present in a contiguous short sequence of text); b) single-document inference responses (i.e., requiring information from multiple text segments in a single document); and c) multi-document inference responses (i.e., information spanning multiple documents is required). The classification task was approached in two ways. First, we extracted features from students' answers to the comprehension questions using linguistic and semantic indices related to textual complexity, together with an extended Cohesion Network Analysis (CNA) graph to assess semantic links between the answers and the reference documents. Second, we compared different Recurrent Neural Network (RNN) architectures that rely on word embeddings to encode both answers and reference documents. Our best RNN-based model predicts the answer type with an accuracy of 81%.
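To make the second approach concrete, the following is a minimal, hypothetical sketch (not the authors' exact architecture) of an RNN classifier that encodes a student answer and its reference document with a shared bidirectional GRU over word embeddings and predicts one of the three answer types. The vocabulary size, embedding and hidden dimensions, and the concatenation-based fusion are illustrative assumptions.

```python
# Hedged sketch of an RNN answer-type classifier; hyperparameters and the
# fusion scheme are illustrative assumptions, not the paper's reported setup.
import torch
import torch.nn as nn

class AnswerTypeClassifier(nn.Module):
    def __init__(self, vocab_size=10_000, emb_dim=100, hidden_dim=128, n_classes=3):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        # Shared bidirectional GRU encoder for both answers and documents.
        self.encoder = nn.GRU(emb_dim, hidden_dim, batch_first=True, bidirectional=True)
        # Answer and document encodings are concatenated before classification
        # over the three answer types (textbase, single-doc, multi-doc inference).
        self.classifier = nn.Linear(4 * hidden_dim, n_classes)

    def encode(self, token_ids):
        # token_ids: (batch, seq_len) -> (batch, 2 * hidden_dim)
        embedded = self.embedding(token_ids)
        _, h_n = self.encoder(embedded)              # h_n: (2, batch, hidden_dim)
        return torch.cat([h_n[0], h_n[1]], dim=-1)   # concatenate both directions

    def forward(self, answer_ids, document_ids):
        answer_vec = self.encode(answer_ids)
        document_vec = self.encode(document_ids)
        return self.classifier(torch.cat([answer_vec, document_vec], dim=-1))

# Toy usage with random token ids.
model = AnswerTypeClassifier()
answers = torch.randint(1, 10_000, (4, 50))     # 4 answers, 50 tokens each
documents = torch.randint(1, 10_000, (4, 300))  # 4 reference documents
logits = model(answers, documents)               # (4, 3) scores over answer types
```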
Date of Conference: 09-11 November 2020