Showing all 8 results
Peer reviewed; full text available on ERIC
Kim, Sooyeon; Moses, Tim – ETS Research Report Series, 2016
The purpose of this study is to evaluate the extent to which item response theory (IRT) proficiency estimation methods are robust to the presence of aberrant responses under the GRE® General Test multistage adaptive testing (MST) design. To that end, a wide range of atypical response behaviors affecting as many as 10% of the test items…
Descriptors: Item Response Theory, Computation, Robustness (Statistics), Response Style (Tests)
Kim, Seock-Ho; Cohen, Allan S. – 2000
The ability estimates from Gibbs sampling and the magnitudes of their posterior standard deviations were investigated. Item parameters of the Q-E intelligence test (J. Fraenkel and N. Wallen, 2000) for 44 examinees were obtained using Gibbs sampling, marginal Bayesian estimation, and BILOG. Two normal priors were used in item parameter estimation.…
Descriptors: Ability, Bayesian Statistics, Estimation (Mathematics), Intelligence Tests
Cantrell, Catherine E. – 1997
This paper discusses the limitations of Classical Test Theory, the purpose of Item Response Theory/Latent Trait Measurement models, and the step-by-step calculations in the Rasch measurement model. The paper explains how Item Response Theory (IRT) transforms person abilities and item difficulties into the same metric for test-independent and…
Descriptors: Ability, Difficulty Level, Estimation (Mathematics), Item Response Theory
Peer reviewed; full text available on ERIC
Zhang, Jinming – ETS Research Report Series, 2005
Lord's bias function and the weighted likelihood estimation method are effective in reducing the bias of the maximum likelihood estimate of an examinee's ability under the assumption that the true item parameters are known. This paper presents simulation studies to determine the effectiveness of these two methods in reducing the bias when the item…
Descriptors: Statistical Bias, Maximum Likelihood Statistics, Computation, Ability
Kim, Seock-Ho – 1998
The accuracy of the Markov chain Monte Carlo procedure, Gibbs sampling, was considered for estimation of item and ability parameters of the one-parameter logistic model. Four data sets were analyzed to evaluate the Gibbs sampling procedure. Data sets were also analyzed using methods of conditional maximum likelihood, marginal maximum likelihood,…
Descriptors: Ability, Estimation (Mathematics), Item Response Theory, Markov Processes
Wang, Xiang-Bo; Harris, Vincent; Roussos, Louis – 2002
Multidimensionality is known to affect the accuracy of item parameter and ability estimation, which subsequently influences the computation of item characteristic curves (ICCs) and true scores. By judiciously combining sections of a Law School Admission Test (LSAT), 11 sections with varying degrees of uni- and multidimensional structure are used…
Descriptors: Ability, College Entrance Examinations, Computer Assisted Testing, Estimation (Mathematics)
Lee, Yong-Won; Kantor, Robert; Mollaun, Pam – 2002
This study examines the score dependability of writing and speaking assessments on the Test of English as a Foreign Language (TOEFL) from the perspectives of univariate and multivariate generalizability theory (G-theory), and presents the findings of three separate G-theory studies. For writing, the focus was on evaluating the impact on…
Descriptors: Ability, English (Second Language), Generalizability Theory, Item Bias
Hedges, Larry V.; Vevea, Jack L. – 1997
This study investigates the amount of uncertainty added to National Assessment of Educational Progress (NAEP) estimates by equating error under both ideal and less-than-ideal circumstances. Data from past administrations are used to guide simulations of various equating designs, and error due to equating is estimated empirically. The design…
Descriptors: Ability, Elementary Secondary Education, Equated Scores, Error of Measurement