Showing all 14 results
Peer reviewed
Jacobs, Cassandra L.; Cho, Sun-Joo; Watson, Duane G. – Cognitive Science, 2019
Syntactic priming in language production is the increased likelihood of using a recently encountered syntactic structure. In this paper, we examine two theories of why speakers can be primed: error-driven learning accounts (Bock, Dell, Chang, & Onishi, 2007; Chang, Dell, & Bock, 2006) and activation-based accounts (Pickering &…
Descriptors: Priming, Syntax, Prediction, Linguistic Theory
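The error-driven learning accounts cited above treat priming as a by-product of prediction-error-based learning. As a purely illustrative sketch (a generic delta rule with hypothetical structure names, not the cited connectionist model), the update below shows how a mismatch between what was expected and what was heard leaves a lasting change in structure weights:

```python
# Toy delta-rule illustration of an error-driven account of priming
# (hypothetical; not the model from Chang et al., 2006). The gap between the
# structure actually heard and the speaker's expectation drives a weight
# change that persists beyond the current trial.

def update(weights, heard, rate=0.1):
    """Nudge each structure's weight toward what was actually encountered."""
    return {s: w + rate * ((1.0 if s == heard else 0.0) - w)
            for s, w in weights.items()}

weights = {"prepositional_dative": 0.5, "double_object": 0.5}
weights = update(weights, heard="double_object")
print(weights)  # the double-object weight rises; the shift is long-lasting
```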
Suh, Youngsuk; Cho, Sun-Joo; Bottge, Brian A. – Grantee Submission, 2018
This article presents a multilevel longitudinal nested logit model for analyzing correct response and error types in multilevel longitudinal intervention data collected under a pretest-posttest, cluster randomized trial design. The use of the model is illustrated with a real data analysis, including a model comparison study regarding model…
Descriptors: Hierarchical Linear Modeling, Longitudinal Studies, Error Patterns, Change
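The abstract does not give the model equations; a minimal sketch of a nested (sequential) logit of the kind described, with assumed notation for student i in cluster j at time t, might look like this:

```latex
% Sketch only; subscripts and random-effect structure are assumptions, not the
% authors' exact specification. Stage 1 models correct vs. incorrect; stage 2
% models error type e conditional on an incorrect response.
\begin{align*}
\Pr(Y_{tij} = 1) &= \operatorname{logit}^{-1}\!\big(\beta_0 + \beta_1\,\mathrm{time}_t + u_j + v_{ij}\big),\\[2pt]
\Pr(E_{tij} = e \mid Y_{tij} = 0) &=
  \frac{\exp\big(\gamma_{0e} + \gamma_{1e}\,\mathrm{time}_t + u_{je} + v_{ije}\big)}
       {\sum_{e'} \exp\big(\gamma_{0e'} + \gamma_{1e'}\,\mathrm{time}_t + u_{je'} + v_{ije'}\big)}
\end{align*}
```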
Peer reviewed
Choi, In-Hee; Paek, Insu; Cho, Sun-Joo – Journal of Experimental Education, 2017
The purpose of the current study is to examine the performance of four information criteria (Akaike's information criterion [AIC], corrected AIC [AICC], Bayesian information criterion [BIC], sample-size adjusted BIC [SABIC]) for detecting the correct number of latent classes in the mixture Rasch model through simulations. The simulation study…
Descriptors: Item Response Theory, Models, Bayesian Statistics, Simulation
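For reference, the four indices compared in the simulation have standard closed forms; the sketch below uses the usual textbook definitions (loglik is the maximized log-likelihood of a fitted mixture Rasch model, k its number of free parameters, n the sample size) and is not code from the study:

```python
import numpy as np

def information_criteria(loglik, k, n):
    """Standard forms of AIC, AICC, BIC, and SABIC."""
    aic = -2 * loglik + 2 * k
    aicc = aic + (2 * k * (k + 1)) / (n - k - 1)      # small-sample correction
    bic = -2 * loglik + k * np.log(n)
    sabic = -2 * loglik + k * np.log((n + 2) / 24)    # sample-size adjusted BIC
    return {"AIC": aic, "AICC": aicc, "BIC": bic, "SABIC": sabic}

# Fit the mixture Rasch model with 1, 2, 3, ... latent classes and prefer the
# class count that minimizes a given index.
```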
Peer reviewed
Lee, Wooyeol; Cho, Sun-Joo – Applied Measurement in Education, 2017
Utilizing a longitudinal item response model, this study investigated the effect of item parameter drift (IPD) on item parameters and person scores via a Monte Carlo study. Item parameter recovery was investigated for various IPD patterns in terms of bias and root mean-square error (RMSE), and percentage of time the 95% confidence interval covered…
Descriptors: Item Response Theory, Test Items, Bias, Computation
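The recovery criteria named in the abstract (bias, RMSE, and 95% confidence interval coverage) have standard definitions; a generic Monte Carlo summary, with one estimate and interval per replication, could be computed as follows (illustrative only, not the authors' code):

```python
import numpy as np

def recovery_summary(estimates, true_value, ci_lower, ci_upper):
    """Bias, RMSE, and 95% CI coverage across Monte Carlo replications."""
    est = np.asarray(estimates, dtype=float)
    bias = np.mean(est - true_value)
    rmse = np.sqrt(np.mean((est - true_value) ** 2))
    coverage = np.mean((np.asarray(ci_lower) <= true_value) &
                       (true_value <= np.asarray(ci_upper)))
    return {"bias": bias, "RMSE": rmse, "coverage": coverage}
```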
Peer reviewed
Lee, Woo-yeol; Cho, Sun-Joo – Journal of Educational Measurement, 2017
Cross-level invariance in a multilevel item response model can be investigated by testing whether the within-level item discriminations are equal to the between-level item discriminations. Testing the cross-level invariance assumption is important to understand constructs in multilevel data. However, in most multilevel item response model…
Descriptors: Test Items, Item Response Theory, Item Analysis, Simulation
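To make the hypothesis concrete, a multilevel two-parameter model with assumed notation (person p in cluster j responding to item i) can carry separate within- and between-cluster discriminations; cross-level invariance is then the constraint that the two are equal for every item:

```latex
% Sketch with assumed notation, not the article's exact parameterization.
\[
\Pr(Y_{pij} = 1) \;=\; \operatorname{logit}^{-1}\!\big(a_i^{B}\,\theta_j^{B} + a_i^{W}\,\theta_{pj}^{W} - b_i\big),
\qquad H_0:\; a_i^{W} = a_i^{B} \ \text{for all } i.
\]
```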
Peer reviewed
Cho, Sun-Joo; Preacher, Kristopher J. – Educational and Psychological Measurement, 2016
Multilevel modeling (MLM) is frequently used to detect cluster-level group differences in cluster randomized trial and observational studies. Group differences on the outcomes (posttest scores) are detected by controlling for the covariate (pretest scores) as a proxy variable for unobserved factors that predict future attributes. The pretest and…
Descriptors: Error of Measurement, Error Correction, Multivariate Analysis, Hierarchical Linear Modeling
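A minimal sketch of the covariate-adjustment strategy described above, using simulated data and hypothetical column names (posttest, pretest, treat, cluster); the random-intercept model below is the standard adjustment whose behavior under covariate measurement error motivates the article:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
cluster = np.repeat(np.arange(20), 15)                 # 20 clusters, 15 students each
treat = (cluster % 2).astype(float)                    # cluster-level treatment indicator
pretest = rng.normal(size=cluster.size)
posttest = (0.5 * treat + 0.6 * pretest
            + rng.normal(scale=0.4, size=20)[cluster]  # cluster random effect
            + rng.normal(size=cluster.size))
df = pd.DataFrame({"cluster": cluster, "treat": treat,
                   "pretest": pretest, "posttest": posttest})

# Random-intercept ANCOVA: the 'treat' coefficient is the cluster-level group
# difference on the posttest after controlling for the observed pretest.
fit = smf.mixedlm("posttest ~ pretest + treat", df, groups=df["cluster"]).fit()
print(fit.summary())
```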
Cho, Sun-Joo; Bottge, Brian A. – Grantee Submission, 2015
In a pretest-posttest cluster-randomized trial, one of the methods commonly used to detect an intervention effect involves controlling pre-test scores and other related covariates while estimating an intervention effect at post-test. In many applications in education, the total post-test and pre-test scores that ignore measurement error in the…
Descriptors: Item Response Theory, Hierarchical Linear Modeling, Pretests Posttests, Scores
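Why measurement error in the total scores matters can be seen from the standard classical-test-theory attenuation result for the simple one-covariate case, stated here only as background and not as the authors' derivation:

```latex
% With observed pretest X = T + e and reliability rho = Var(T)/Var(X),
% the pretest coefficient in a simple covariate adjustment is attenuated:
\[
\operatorname{plim}\,\hat{\beta}_{\mathrm{pretest}} \;=\; \rho\,\beta_{\mathrm{pretest}},
\qquad \rho \;=\; \frac{\operatorname{Var}(T)}{\operatorname{Var}(T) + \operatorname{Var}(e)} \;<\; 1,
\]
% so the adjustment is incomplete, and when groups differ on the true pretest,
% part of that difference can leak into the estimated intervention effect.
```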
Peer reviewed
Goodwin, Amanda P.; Gilbert, Jennifer K.; Cho, Sun-Joo; Kearns, Devin M. – Journal of Educational Psychology, 2014
The current study models reader, item, and word contributions to the lexical representations of 39 morphologically complex words for 172 middle school students using a crossed random-effects item response model with multiple outcomes. We report 3 findings. First, results suggest that lexical representations can be characterized by separate but…
Descriptors: Lexicology, Morphology (Languages), Reading Comprehension, Reader Text Relationship
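A generic form of a crossed random-effects item response model of this kind, with assumed notation (reader p, word i; reader covariates X_p, word covariates W_i, and their interactions as fixed effects), is:

```latex
% Sketch only; the covariate vectors and interaction terms are placeholders.
\[
\Pr(Y_{pi} = 1) \;=\; \operatorname{logit}^{-1}\!\big(
  \beta_0 + \boldsymbol{\beta}_X^{\top}\mathbf{X}_p + \boldsymbol{\beta}_W^{\top}\mathbf{W}_i
  + \boldsymbol{\beta}_{XW}^{\top}(\mathbf{X}_p \times \mathbf{W}_i) + \theta_p + b_i\big),
\qquad \theta_p \sim N(0,\sigma_{\theta}^2),\;\; b_i \sim N(0,\sigma_{b}^2),
\]
% where (X_p x W_i) stands for the reader-by-word interaction terms and readers
% and words are crossed random effects.
```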
Peer reviewed
Suh, Youngsuk; Cho, Sun-Joo; Wollack, James A. – Journal of Educational Measurement, 2012
In the presence of test speededness, the parameters of item response theory models can be poorly estimated due to conditional dependencies among items, particularly for end-of-test items (i.e., speeded items). This article conducted a systematic comparison of five item calibration procedures--a two-parameter logistic (2PL) model, a…
Descriptors: Response Style (Tests), Timed Tests, Test Items, Item Response Theory
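For reference, the two-parameter logistic (2PL) item response function mentioned in the abstract is:

```latex
\[
\Pr(Y_{pi} = 1 \mid \theta_p) \;=\;
\frac{\exp\!\big(a_i(\theta_p - b_i)\big)}{1 + \exp\!\big(a_i(\theta_p - b_i)\big)},
\]
% a_i = item discrimination, b_i = item difficulty, theta_p = examinee ability.
% Speededness induces dependence among end-of-test items beyond what this
% conditional-independence model allows, which is what degrades calibration.
```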
Peer reviewed
Goodwin, Amanda P.; Gilbert, Jennifer K.; Cho, Sun-Joo – Reading Research Quarterly, 2013
The current study uses a crossed random-effects item response model to simultaneously examine both reader and word characteristics and interactions between them that predict the reading of 39 morphologically complex words for 221 middle school students. Results suggest that a reader's ability to read a root word (e.g., "isolate") predicts that…
Descriptors: Item Response Theory, Morphemes, Semantics, Reading Comprehension
Peer reviewed
Cho, Sun-Joo; Cohen, Allan S. – Journal of Educational and Behavioral Statistics, 2010
Mixture item response theory models have been suggested as a potentially useful methodology for identifying latent groups formed along secondary, possibly nuisance dimensions. In this article, we describe a multilevel mixture item response theory (IRT) model (MMixIRTM) that allows for the possibility that this nuisance dimensionality may function…
Descriptors: Simulation, Mathematics Tests, Item Response Theory, Student Behavior
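In single-level form, a mixture item response model of the kind underlying the MMixIRTM marginalizes over latent classes with class-specific item parameters; the multilevel extension described above additionally lets class structure operate at the cluster level. The notation below is assumed, not the article's:

```latex
\[
\Pr(Y_{pi} = 1) \;=\; \sum_{g=1}^{G} \pi_g\,
\operatorname{logit}^{-1}\!\big(\theta_{pg} - b_{ig}\big),
\qquad \sum_{g=1}^{G} \pi_g = 1,
\]
% where g indexes latent classes, pi_g are class proportions, and item
% difficulties b_{ig} are class-specific.
```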
Cho, Sun-Joo; Cohen, Allan S.; Bottge, Brian – Grantee Submission, 2013
A multilevel latent transition analysis (LTA) with a mixture IRT measurement model (MixIRTM) is described for investigating the effectiveness of an intervention. The addition of a MixIRTM to the multilevel LTA permits consideration of both potential heterogeneity in students' response to instructional intervention and a methodology for…
Descriptors: Intervention, Item Response Theory, Statistical Analysis, Models
Peer reviewed
Cho, Sun-Joo; Cohen, Allan S.; Kim, Seock-Ho; Bottge, Brian – Applied Psychological Measurement, 2010
A latent transition analysis (LTA) model was described with a mixture Rasch model (MRM) as the measurement model. Unlike the LTA, which was developed with a latent class measurement model, the LTA-MRM permits within-class variability on the latent variable, making it more useful for measuring treatment effects within latent classes. A simulation…
Descriptors: Item Response Theory, Measurement, Models, Statistical Analysis
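In outline (assumed notation, not the article's parameterization), the LTA part governs movement between latent classes across time points while the mixture Rasch part serves as the measurement model at each time, with ability free to vary within class:

```latex
\begin{align*}
\Pr(C_{t+1} = h \mid C_t = g) &= \tau_{gh}, \qquad \textstyle\sum_h \tau_{gh} = 1,\\[2pt]
\Pr(Y_{pit} = 1 \mid C_t = g) &= \operatorname{logit}^{-1}\!\big(\theta_{ptg} - b_{ig}\big),
\qquad \theta_{ptg} \sim N(\mu_g, \sigma_g^2),
\end{align*}
% where tau_{gh} are transition probabilities between classes g and h, and the
% within-class distribution of theta is what distinguishes the MRM measurement
% model from a latent class measurement model.
```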
Peer reviewed
Li, Feiming; Cohen, Allan S.; Kim, Seock-Ho; Cho, Sun-Joo – Applied Psychological Measurement, 2009
This study examines model selection indices for use with dichotomous mixture item response theory (IRT) models. Five indices are considered: Akaike's information criterion (AIC), Bayesian information criterion (BIC), deviance information criterion (DIC), pseudo-Bayes factor (PsBF), and posterior predictive model checks (PPMC). The five…
Descriptors: Item Response Theory, Models, Selection, Methods
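Of the five indices, the deviance information criterion is the one defined from the posterior rather than the maximized likelihood; its standard form, with the posterior mean deviance and the effective number of parameters, is:

```latex
\[
\mathrm{DIC} \;=\; \bar{D} + p_D, \qquad
p_D \;=\; \bar{D} - D(\bar{\theta}), \qquad
D(\theta) \;=\; -2\,\log L(\theta),
\]
% where Dbar is the posterior mean of the deviance and D(thetabar) is the
% deviance evaluated at the posterior mean of the parameters.
```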