ERIC Number: ED645593
Record Type: Non-Journal
Publication Date: 2022
Pages: 151
Abstractor: As Provided
ISBN: 979-8-8375-2299-4
ISSN: N/A
EISSN: N/A
Comparison of Quantile Regression Models on SGP Estimation and the Effect on Classification Consistency and Accuracy
Adam J. Reeger
ProQuest LLC, Ph.D. Dissertation, The University of Iowa
Student growth percentiles (SGPs) have become a common means of measuring and reporting student academic growth for state education accountability, and some states have adopted SGP cutscores as a means of classifying student growth into categories such as "high/medium/low" growth. It has therefore become important to understand the properties of these growth classifications, particularly the degree to which examinees can be classified into growth categories consistently and accurately relative to an SGP cutscore. This simulation study investigated the classification consistency and accuracy of SGPs through the lens of an IRT-based classification framework generalized by Lee (2010). A focal point of this study was to investigate the potential impact of accounting for higher-level effects in the SGP estimation process and the subsequent classifications. Accounting for hierarchical data structures, such as students nested in classrooms, is important in regression models, as ignoring such sources of variation can lead to misspecified regression weights and incorrect conclusions about model effects. Thus, classification indices based on SGP estimates from two distinct linear quantile regression models were compared. One model was the fixed-effects quantile model often used to estimate SGPs, while the second was a quantile mixed-effects model that incorporated a random effect for classroom membership. Additionally, factors such as IRT ability estimation method, level of classroom heterogeneity, and number of classification cutscores were explored in terms of their impact on the classification consistency and accuracy of SGPs into growth categories. Results showed that estimates from a quantile mixed-effects model produced slightly higher classification measures than estimates from a quantile fixed-effects model only when the mean differences in ability across classrooms were large.
Also, classification consistency and accuracy decreased as the number of cutscores increased, but the classification patterns among the other factors became less clear when the number of cutscores was greater than two. Finally, there seemed to be some benefit to classification accuracy from accounting for a classroom random effect in the ability estimation process. [The dissertation citations contained here are published with the permission of ProQuest LLC. Further reproduction is prohibited without permission. Copies of dissertations may be obtained by telephone: 1-800-521-0600. Web page: http://bibliotheek.ehb.be:2222/en-US/products/dissertations/individuals.shtml.]
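As background for the abstract above, the fixed-effects quantile regression approach to SGP estimation can be sketched roughly as follows: fit conditional quantile functions of the current-year score given prior scores at percentiles 1 through 99, then assign each student the highest percentile whose predicted quantile lies at or below the student's observed score. The minimal Python sketch below illustrates this idea on simulated data with a single prior score and pinball-loss minimization; it is not the dissertation's model (which also considered a quantile mixed-effects variant with a classroom random effect), and all data, function names, and parameter values here are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Illustrative simulated data: one prior-year score predicting the
# current-year score (not the dissertation's simulation design).
n = 500
prior = rng.normal(0.0, 1.0, n)
current = 0.7 * prior + rng.normal(0.0, 0.6, n)

def pinball_loss(beta, x, y, tau):
    """Quantile-regression (pinball) loss for a linear model y ~ b0 + b1*x."""
    resid = y - (beta[0] + beta[1] * x)
    return np.mean(np.maximum(tau * resid, (tau - 1.0) * resid))

def fit_quantile(x, y, tau):
    """Fit a one-predictor linear quantile regression by direct minimization."""
    res = minimize(pinball_loss, x0=[0.0, 0.0], args=(x, y, tau),
                   method="Nelder-Mead")
    return res.x

# Estimate the conditional quantile functions at percentiles 1..99.
taus = np.arange(1, 100) / 100.0
betas = np.array([fit_quantile(prior, current, t) for t in taus])

def sgp(prior_score, current_score):
    """Highest percentile whose predicted conditional quantile lies at or
    below the observed current score (a common operational SGP definition)."""
    preds = betas[:, 0] + betas[:, 1] * prior_score
    below = np.nonzero(preds <= current_score)[0]
    return int(below[-1] + 1) if below.size else 1

# A student scoring well above expectation gets a high growth percentile;
# one scoring well below expectation gets a low one.
print(sgp(0.0, 1.5), sgp(0.0, -1.5))
```

In practice, cutting the resulting SGP scale with one or more cutscores yields the growth categories whose classification consistency and accuracy the study evaluates.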
Descriptors: Academic Achievement, Achievement Gains, Accountability, Regression (Statistics), Models, Classification, Accuracy, Computation, Cutting Scores
ProQuest LLC. 789 East Eisenhower Parkway, P.O. Box 1346, Ann Arbor, MI 48106. Tel: 800-521-0600; Web site: http://bibliotheek.ehb.be:2222/en-US/products/dissertations/individuals.shtml
Publication Type: Dissertations/Theses - Doctoral Dissertations
Education Level: N/A
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: N/A
Grant or Contract Numbers: N/A