Showing 1 to 15 of 21 results
Peer reviewed
Tendeiro, Jorge N.; Meijer, Rob R. – Applied Psychological Measurement, 2013
To classify an item score pattern as not fitting a nonparametric item response theory (NIRT) model, the probability of exceedance (PE) of an observed response vector x can be determined as the sum of the probabilities of all response vectors that are, at most, as likely as x, conditional on the test's total score. Vector x is to be considered…
Descriptors: Probability, Nonparametric Statistics, Goodness of Fit, Test Length
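The probability-of-exceedance idea can be sketched for binary items. Assuming fixed, independent item success probabilities (illustrative values, not estimates from a fitted NIRT model), the PE of an observed vector is the summed conditional probability, among all vectors with the same total score, of those at most as likely as the observed one:

```python
from itertools import combinations

def conditional_pe(x, p):
    """Probability of exceedance of binary response vector x, conditional
    on its total score, under independent item success probabilities p
    (a toy stand-in for model-implied probabilities)."""
    n, s = len(x), sum(x)

    def vec_prob(v):
        out = 1.0
        for vi, pi in zip(v, p):
            out *= pi if vi else (1.0 - pi)
        return out

    # enumerate all response vectors with the same total score s
    vecs = [[1 if i in ones else 0 for i in range(n)]
            for ones in combinations(range(n), s)]
    total = sum(vec_prob(v) for v in vecs)
    px = vec_prob(list(x))
    # PE: summed conditional probability of vectors at most as likely as x
    return sum(vec_prob(v) for v in vecs if vec_prob(v) <= px + 1e-12) / total
```

A Guttman-consistent pattern is the most likely vector at its score, so its PE is 1; a reversed pattern gets a small PE and would be flagged as misfitting.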
Peer reviewed
Johnson, Timothy R. – Applied Psychological Measurement, 2013
One of the distinctions between classical test theory and item response theory is that the former focuses on sum scores and their relationship to true scores, whereas the latter concerns item responses and their relationship to latent scores. Although item response theory is often viewed as the richer of the two theories, sum scores are still…
Descriptors: Item Response Theory, Scores, Computation, Bayesian Statistics
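The link between item responses and sum scores can be made concrete: given model-implied success probabilities for each item (made-up values below), the exact sum-score distribution follows from a standard convolution recursion in the spirit of the Lord-Wingersky algorithm:

```python
def sum_score_distribution(p):
    """Exact distribution of the total (sum) score for independent binary
    items with success probabilities p; in IRT these probabilities would
    come from the model at a fixed latent score."""
    dist = [1.0]  # P(score = 0) with zero items administered
    for pi in p:
        new = [0.0] * (len(dist) + 1)
        for s, prob in enumerate(dist):
            new[s] += prob * (1.0 - pi)   # item answered incorrectly
            new[s + 1] += prob * pi       # item answered correctly
        dist = new
    return dist
```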
Peer reviewed
Kreiner, Svend – Applied Psychological Measurement, 2011
To rule out the need for a two-parameter item response theory (IRT) model during item analysis by Rasch models, it is important to check the Rasch model's assumption that all items have the same item discrimination. Biserial and polyserial correlation coefficients measuring the association between items and restscores are often used in an informal…
Descriptors: Item Analysis, Correlation, Item Response Theory, Models
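The informal check described above can be illustrated with a point-biserial item-restscore correlation (a simpler relative of the biserial and polyserial coefficients the article studies); the data below are made up:

```python
import math

def item_restscore_corr(data, j):
    """Point-biserial correlation between item j and the restscore
    (total score excluding item j). data: list of 0/1 response lists.
    Assumes the item column and restscores are not constant."""
    item = [row[j] for row in data]
    rest = [sum(row) - row[j] for row in data]
    n = len(data)
    mi, mr = sum(item) / n, sum(rest) / n
    cov = sum((a - mi) * (b - mr) for a, b in zip(item, rest)) / n
    si = math.sqrt(sum((a - mi) ** 2 for a in item) / n)
    sr = math.sqrt(sum((b - mr) ** 2 for b in rest) / n)
    return cov / (si * sr)
```

Under the Rasch assumption of equal discrimination, these correlations should be roughly comparable across items; a formal test of that is what the article develops.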
Peer reviewed
Henson, Robert; Roussos, Louis; Douglas, Jeff; He, Xuming – Applied Psychological Measurement, 2008
Cognitive diagnostic models (CDMs) model the probability of correctly answering an item as a function of an examinee's attribute mastery pattern. Because estimation of the mastery pattern involves more than a continuous measure of ability, reliability concepts introduced by classical test theory and item response theory do not apply. The cognitive…
Descriptors: Diagnostic Tests, Classification, Probability, Item Response Theory
Peer reviewed
Finkelman, Matthew David – Applied Psychological Measurement, 2010
In sequential mastery testing (SMT), assessment via computer is used to classify examinees into one of two mutually exclusive categories. Unlike paper-and-pencil tests, SMT has the capability to use variable-length stopping rules. One approach to shortening variable-length tests is stochastic curtailment, which halts examination if the probability…
Descriptors: Mastery Tests, Computer Assisted Testing, Adaptive Testing, Test Length
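Stochastic curtailment can be sketched with a simple number-correct rule. This toy version models the remaining responses as Bernoulli(p) rather than using the article's model-based predictive probabilities; `k`, `m`, `n`, `cutoff`, and `gamma` are all illustrative:

```python
from math import comb

def curtail(k, m, n, cutoff, p=0.5, gamma=0.95):
    """Stochastic curtailment sketch: after m of n items with k correct,
    stop once the classification (pass iff final number correct >= cutoff)
    is already near-certain under a Bernoulli(p) model for the rest."""
    r = n - m          # items remaining
    need = cutoff - k  # further correct answers required to pass
    if need <= 0:
        p_pass = 1.0
    elif need > r:
        p_pass = 0.0
    else:
        p_pass = sum(comb(r, t) * p**t * (1 - p)**(r - t)
                     for t in range(need, r + 1))
    if p_pass >= gamma:
        return "stop: classify as master"
    if p_pass <= 1 - gamma:
        return "stop: classify as nonmaster"
    return "continue testing"
```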
Peer reviewed
Roberts, James S. – Applied Psychological Measurement, 2008
Orlando and Thissen (2000) developed an item fit statistic for binary item response theory (IRT) models known as S-X². This article generalizes their statistic to polytomous unfolding models. Four alternative formulations of S-X² are developed for the generalized graded unfolding model (GGUM). The GGUM is a…
Descriptors: Item Response Theory, Goodness of Fit, Test Items, Models
Peer reviewed
van Abswoude, Alexandra A. H.; Vermunt, Jeroen K.; Hemker, Bas T. – Applied Psychological Measurement, 2007
Mokken scale analysis can be used for scaling under nonparametric item response theory models. The results may, however, not reflect the underlying dimensionality of data. Various features of Mokken scale analysis--the H coefficient, Mokken scale conditions, and algorithms--may explain this result. In this article, three new H-based objective…
Descriptors: Measures (Individuals), Probability, Simulation, Item Response Theory
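The H coefficient at the center of Mokken scale analysis can be computed directly for binary data: one minus the ratio of observed Guttman errors to the errors expected under item independence. A minimal sketch with made-up data:

```python
def loevinger_H(data):
    """Loevinger's scalability coefficient H for binary items:
    1 - (observed Guttman errors) / (errors expected under independence).
    data: list of 0/1 response lists."""
    n, k = len(data), len(data[0])
    p = [sum(row[j] for row in data) / n for j in range(k)]
    F = E = 0.0
    for i in range(k):
        for j in range(i + 1, k):
            # order the pair: the 'hard' item is the less popular one
            hard, easy = (i, j) if p[i] <= p[j] else (j, i)
            # Guttman error: hard item correct, easier item wrong
            F += sum(1 for row in data if row[hard] == 1 and row[easy] == 0)
            E += n * p[hard] * (1 - p[easy])
    return 1.0 - F / E
```

Perfect Guttman data yield H = 1; each violation of the item ordering pulls H down, which is why H-based objective functions can drive scale construction.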
Peer reviewed
Dimitrov, Dimiter M. – Applied Psychological Measurement, 2007
The validation of cognitive attributes required for correct answers on binary test items or tasks has been addressed in previous research through the integration of cognitive psychology and psychometric models using parametric or nonparametric item response theory, latent class modeling, and Bayesian modeling. All previous models, each with their…
Descriptors: Individual Testing, Test Items, Psychometrics, Probability
Peer reviewed
Christensen, Karl Bang; Kreiner, Svend – Applied Psychological Measurement, 2007
Many statistical tests are designed to test the different assumptions of the Rasch model, but only a few are directed at detecting multidimensionality. The Martin-Löf test is an attractive approach, the disadvantage being that its null distribution deviates strongly from the asymptotic chi-square distribution for most realistic sample sizes. A Monte…
Descriptors: Item Response Theory, Monte Carlo Methods, Testing, Models
Peer reviewed
Kreiner, Svend; Hansen, Mogens; Hansen, Carsten Rosenberg – Applied Psychological Measurement, 2006
Mixed Rasch models add latent classes to conventional Rasch models, assuming that the Rasch model applies within each class and that relative difficulties of items are different in two or more latent classes. This article considers a family of stochastically ordered mixed Rasch models, with ordinal latent classes characterized by increasing total…
Descriptors: Item Response Theory, Cognitive Tests, Problem Solving, Statistical Analysis
Peer reviewed
Eggen, Theo J. H. M.; Verschoor, Angela J. – Applied Psychological Measurement, 2006
Computerized adaptive tests (CATs) are individualized tests that, from a measurement point of view, are optimal for each individual, possibly under some practical conditions. In the present study, it is shown that maximum information item selection in CATs using an item bank that is calibrated with the one- or the two-parameter logistic model…
Descriptors: Adaptive Testing, Difficulty Level, Test Items, Item Response Theory
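Maximum-information item selection under the two-parameter logistic model can be sketched as follows (item parameters are made up; the one-parameter case is simply all discriminations `a` equal):

```python
import math

def fisher_info_2pl(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def select_item(theta, bank, administered):
    """Maximum-information selection: pick the not-yet-administered item
    with the largest Fisher information at the current ability estimate.
    bank: list of (a, b) tuples."""
    best, best_info = None, -1.0
    for idx, (a, b) in enumerate(bank):
        if idx in administered:
            continue
        info = fisher_info_2pl(theta, a, b)
        if info > best_info:
            best, best_info = idx, info
    return best
```

With equal discriminations, information peaks where difficulty matches ability, so the rule keeps choosing items whose `b` is closest to the current theta estimate.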
Peer reviewed
Martin, Ernesto San; del Pino, Guido; De Boeck, Paul – Applied Psychological Measurement, 2006
An ability-based guessing model is formulated and applied to several data sets from educational tests in language and in mathematics. The model is formulated so that the probability of a correct guess depends not only on the item but also on the ability of the individual, weighted by a general discrimination parameter. By so…
Descriptors: Guessing (Tests), Probability, Mathematics Tests, Language Tests
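The idea can be sketched with a toy parameterization (not the paper's exact model): a logistic "knowing" component plus a guessing component whose success probability itself grows with ability through a weight `lam`:

```python
import math

def p_correct(theta, b, lam, c):
    """Toy ability-based guessing structure: the examinee either knows the
    answer, with logistic probability, or guesses, with a guessing
    probability that increases with ability theta (weight lam, baseline c).
    All parameter names here are illustrative, not the paper's."""
    know = 1.0 / (1.0 + math.exp(-(theta - b)))
    guess = 1.0 / (1.0 + math.exp(-(c + lam * theta)))
    return know + (1.0 - know) * guess
```

Because both components increase in theta, the overall response curve stays monotone, unlike a fixed lower asymptote in the usual three-parameter model.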
Peer reviewed
Hendrawan, Irene; Glas, Cees A. W.; Meijer, Rob R. – Applied Psychological Measurement, 2005
The effect of person misfit to an item response theory model on mastery/nonmastery decisions was investigated, as was whether classification precision can be improved by identifying misfitting respondents with person-fit statistics. A simulation study was conducted to investigate the probability of a correct…
Descriptors: Probability, Statistics, Test Length, Simulation
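One common person-fit statistic of the kind referred to above is the standardized log-likelihood l_z (a standard choice; the study may compare others). A sketch with illustrative model-implied probabilities:

```python
import math

def lz_statistic(x, p):
    """Standardized log-likelihood person-fit statistic l_z for a binary
    response vector x under model-implied success probabilities p."""
    l0 = sum(xi * math.log(pi) + (1 - xi) * math.log(1 - pi)
             for xi, pi in zip(x, p))
    mean = sum(pi * math.log(pi) + (1 - pi) * math.log(1 - pi) for pi in p)
    var = sum(pi * (1 - pi) * math.log(pi / (1 - pi)) ** 2 for pi in p)
    return (l0 - mean) / math.sqrt(var)
```

Large negative values flag response vectors much less likely than the model expects; such respondents could be screened out before the mastery decision.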
Peer reviewed
Quenette, Mary A.; Nicewander, W. Alan; Thomasson, Gary L. – Applied Psychological Measurement, 2006
Model-based equating was compared to empirical equating of an Armed Services Vocational Aptitude Battery (ASVAB) test form. The model-based equating was done using item pretest data to derive item response theory (IRT) item parameter estimates for those items that were retained in the final version of the test. The analysis of an ASVAB test form…
Descriptors: Item Response Theory, Multiple Choice Tests, Test Items, Computation
Peer reviewed
Andrich, David; Luo, Guanzhong – Applied Psychological Measurement, 1993
A unidimensional model for responses to statements that have an unfolding structure was constructed from the cumulative Rasch model for ordered response categories. A joint maximum likelihood estimation procedure was investigated. Analyses of data from a small simulation and a real data set show that the model is readily applicable. (SLD)
Descriptors: Attitude Measures, Data Collection, Equations (Mathematics), Item Response Theory
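The single-peaked (unfolding) response curve can be sketched with a hyperbolic-cosine form in the spirit of the article's model (the exact parameterization is in the paper; `delta` is the statement location, `rho` a unit parameter):

```python
import math

def hcm_agree(theta, delta, rho):
    """Single-peaked (unfolding) response curve: probability of agreeing
    with a statement is highest when theta is near the statement
    location delta, and falls off symmetrically on either side."""
    return math.exp(rho) / (math.exp(rho) + 2.0 * math.cosh(theta - delta))
```

Unlike a cumulative (monotone) item response curve, agreement drops as theta moves away from delta in either direction, which is the defining feature of unfolding data.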