Peer reviewed
ERIC Number: ED663539
Record Type: Non-Journal
Publication Date: 2024-Sep-21
Pages: N/A
Abstractor: As Provided
ISBN: N/A
ISSN: N/A
EISSN: N/A
Impact of Problem Type on Learning in Computer-Based Learning Platforms: Results from a Large-Scale Experiment
Kirk Vanacore; Ashish Gurung; Adam Sales; Neil Heffernan
Society for Research on Educational Effectiveness
Background: The proliferation of computer-based learning platforms (CBLPs) has increased the focus on understanding how to build scalable systems that optimize learning. CBLPs often rely on closed-response questions (e.g., multiple-choice questions, "select all that apply," "arrange in the correct order") for learning tasks because they are easy to grade automatically. Furthermore, students' selection of distractors can be leveraged to provide automated feedback based on students' likely misconceptions. Yet educators often believe that closed-response questions may lead to shallow learning (Wang et al., 2021). Open-response questions, by contrast, are often considered more rigorous (Magliano et al., 2007). However, open-response questions are harder to grade automatically, and it is therefore harder to provide timely, targeted feedback based on students' responses. Fill-in problems, in which students must provide a specific response in a text box, represent a middle ground for some content areas by requiring students to produce answers independently. Furthermore, responses to fill-in problems can be used to provide automated, targeted feedback based on what the student submitted (Gurung, Baral, et al., 2023; Gurung, Lee, et al., 2023). Given the mixed results from prior work on the learning and assessment value of closed- and open-response questions (Funk & Dickson, 2011; Magliano et al., 2007; Sugrue et al., 1998), fill-in problems offer a potential alternative problem type that may combine the advantages of both.

Current Study: Using an experimental design, we evaluate the impact of fill-in problems, compared to multiple-choice questions (MCQs), on students' performance during a learning activity (RQ1) and on their learning as measured by a post-test (RQ2). Furthermore, we must ensure that all students share in any learning benefits of CBLP features. Thus, following the guidance of Kizilcec & Lee (2022), we evaluate whether the feature widens the gap between advantaged and disadvantaged students, based on students' performance before the activity (RQ3).

Design: This study used mastery learning activities in ASSISTments (Heffernan & Heffernan, 2014), a math-focused CBLP. The experiment involved two middle school math mastery-based activities. Students were randomized to receive either fill-in problems or MCQs throughout the activity. The problem order within the activity was also randomized. Figure 1 shows the randomization scheme, and Figure 2 presents the differences among the fill-in, MCQ, and post-test items. Notably, the post-test consisted of more complex transfer items. Students only took the post-test if they mastered the activity.

Data: The data were collected across five school years in the United States (2017-18, 2018-19, 2019-20, 2020-21, 2021-22). Middle school teachers who used ASSISTments as an instructional tool assigned the mastery learning activities to their students as part of their lessons. In total, 192 teachers assigned the two problem sets to 383 classes, and 6,774 students participated in the experiment.

Method: Our analysis required four steps. First, we ran a series of regression models with robust standard errors (one model per problem sequence) predicting students' accuracy (Y[subscript ij]) within the mastery learning activity from whether students were randomized to treatment (Fill-In[subscript i]), with fixed effects for the activity's problems (P[subscript j]) (Equation 1).
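A plausible written form of Equation 1, consistent with the description above (a sketch based on the variables named in the text, not a reproduction of the authors' exact specification):

    Y_{ij} = \beta_0 + \beta_1 \mathrm{FillIn}_i + P_j + \varepsilon_{ij}

where Y_{ij} is student i's accuracy on problem j, FillIn_i indicates random assignment to the fill-in condition, P_j is a fixed effect for problem j, and \varepsilon_{ij} is an error term estimated with robust standard errors.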
This evaluates how problem type influences students' performance across the activity. Second, we assessed whether attrition differed across conditions. Third, we used a multi-level logistic regression to estimate whether problem type affects students' learning, regressing students' accuracy on post-test items on treatment assignment while accounting for random effects for the student ([mu][subscript i]), class ([mu][subscript c]), and problem ([mu][subscript j]) (Equation 2). Finally, we added to Equation 2 an interaction between Fill-In[subscript i] and students' prior performance in ASSISTments, a measure of their prior mathematical knowledge (a sketch of these model forms appears at the end of this abstract).

Results: (RQ1) On the first problem sequence, students in the MCQ condition outperformed those in the Fill-In condition by an estimated five percentage points ([beta][subscript 1] = -0.05, SE = 0.012, p < 0.0001). Figure 3 displays the average performance (lines) and samples (shading) by condition across the first ten problems in the mastery learning component of the experiment. The pattern of performance differences across the activity suggests that the difficulty students in the Fill-In condition experienced early on potentially benefited them later in the activity. Nevertheless, it is important to acknowledge a potential confound to this explanation: the samples differed across conditions at later problem sequences due to differing mastery and attrition rates. (RQ2) Table 1 details the experiment's attrition rates and the balance test statistics. Students could attrit in multiple ways: not mastering the activity, not starting the post-test, and not completing the post-test. Overall, there were no significant differences in attrition across conditions, although the balance in mastery rates was only marginally non-significant. Table 2 presents the model used to estimate the effect of problem type on post-test performance. Students who engaged with Fill-In problem sets were significantly more likely to provide correct responses on the post-test than those who worked through MCQs ([gamma][subscript 1] = 0.23, SE = 0.06, p < 0.001). Students in the MCQ condition had a 27% probability of answering a given transfer item correctly, whereas students in the Fill-In condition had a 31% probability. As a robustness check, we re-ran this model treating all students who attrited as having responded incorrectly, and the effect remained positive and significant. (RQ3) Table 3 displays the results of the interaction model. The interaction between prior performance and the Fill-In problem set is significant and positive ([gamma][subscript 3] = 0.20, SE = 0.10, p = 0.042). Figure 4 plots the interaction, showing that while students with higher prior performance experienced a positive effect from fill-in problems, those with lower prior performance likely benefited more from MCQs.

Conclusion: Overall, this study's findings provide causal evidence that problem types influence how and whether students learn. We observed that, on average, students had better learning outcomes when using mastery-based assignments with Fill-In problems than with MCQs, but this effect may vary with students' prior knowledge. Notably, prior performance may not be a perfect measure of prior knowledge; thus, this analysis points to the need for further investigation into how problems are presented to ensure all students are learning.
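Note on model forms: a plausible rendering of Equation 2 and its interaction extension, consistent with the Method description above (a sketch only; the authors' exact specification is not reproduced in this record), is

    \mathrm{logit}\,\Pr(Y_{ij} = 1) = \gamma_0 + \gamma_1 \mathrm{FillIn}_i + \mu_i + \mu_c + \mu_j

    \mathrm{logit}\,\Pr(Y_{ij} = 1) = \gamma_0 + \gamma_1 \mathrm{FillIn}_i + \gamma_2 \mathrm{Prior}_i + \gamma_3 (\mathrm{FillIn}_i \times \mathrm{Prior}_i) + \mu_i + \mu_c + \mu_j

where Y_{ij} is a correct response by student i on post-test item j, Prior_i is the student's prior performance in ASSISTments, and \mu_i, \mu_c, and \mu_j are random intercepts for student, class, and problem. As a rough consistency check on the log-odds scale, logit(0.27) is approximately -0.99, and the inverse logit of -0.99 + 0.23 is approximately 0.32, close to the 31% reported for the Fill-In condition.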
Society for Research on Educational Effectiveness. 2040 Sheridan Road, Evanston, IL 60208. Tel: 202-495-0920; e-mail: contact@sree.org; Web site: https://www.sree.org/
Publication Type: Reports - Research
Education Level: Junior High Schools; Middle Schools; Secondary Education
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: Society for Research on Educational Effectiveness (SREE)
Grant or Contract Numbers: N/A