Showing all 13 results
Peer reviewed
Tendeiro, Jorge N.; Meijer, Rob R. – Journal of Educational Measurement, 2014
Recent guidelines for fair educational testing advise checking the validity of individual test scores through the use of person-fit statistics. On the basis of the existing literature, however, it is unclear to practitioners which statistic to use. An overview of relatively simple existing nonparametric approaches to identify atypical response…
Descriptors: Educational Assessment, Test Validity, Scores, Statistical Analysis
Peer reviewed
Smits, Iris A. M.; Timmerman, Marieke E.; Meijer, Rob R. – Applied Psychological Measurement, 2012
The assessment of the number of dimensions and the dimensionality structure of questionnaire data is important in scale evaluation. In this study, the authors evaluate two dimensionality assessment procedures in the context of Mokken scale analysis (MSA), using a so-called fixed lower bound. The comparative simulation study, covering various…
Descriptors: Simulation, Measures (Individuals), Program Effectiveness, Item Response Theory
Peer reviewed
Emons, Wilco H. M.; Meijer, Rob R.; Sijtsma, Klaas – Applied Psychological Measurement, 2002
Studied whether the theoretical sampling distribution of the U3 person-fit statistic is in agreement with the simulated sampling distribution under different item response theory models and varying item and test characteristics. Simulation results suggest that the use of standard normal deviates for the standardized version of the U3 statistic may…
Descriptors: Item Response Theory, Sampling, Simulation, Statistical Distributions
Peer reviewed
Sijtsma, Klaas; Meijer, Rob R. – Psychometrika, 2001
Studied the use of the person response function (PRF) for identifying nonfitting item score patterns. Proposed a person-fit method reformulated in a nonparametric item response theory (IRT) context. Conducted a simulation study to compare the use of the PRF with a person-fit statistic, resulting in the conclusion that the PRF can be used as a…
Descriptors: Item Response Theory, Monte Carlo Methods, Nonparametric Statistics, Scores
Glas, Cees A. W.; Meijer, Rob R. – 2001
A Bayesian approach to the evaluation of person fit in item response theory (IRT) models is presented. In a posterior predictive check, the observed value on a discrepancy variable is positioned in its posterior distribution. In a Bayesian framework, a Markov Chain Monte Carlo procedure can be used to generate samples of the posterior distribution…
Descriptors: Bayesian Statistics, Item Response Theory, Markov Processes, Models
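The posterior predictive check described in the Glas and Meijer abstract can be illustrated with a small sketch, assuming a Rasch model and a count of Guttman errors as the discrepancy variable; the article's actual model, discrepancy measure, and Markov Chain Monte Carlo machinery are not reproduced here, and all function names are ours:

```python
import math
import random

def rasch_prob(theta, b):
    """Probability of a correct response under a Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def guttman_discrepancy(pattern, difficulties):
    """Count Guttman errors: pairs where an easier item is answered
    incorrectly while a harder item is answered correctly."""
    order = sorted(range(len(difficulties)), key=lambda i: difficulties[i])
    p = [pattern[i] for i in order]
    return sum(1 for i in range(len(p)) for j in range(i + 1, len(p))
               if p[i] == 0 and p[j] == 1)

def ppp_value(theta_draws, difficulties, observed, discrepancy, rng):
    """Posterior predictive p-value: for each posterior draw of ability,
    simulate a replicated response pattern under the model and record how
    often its discrepancy is at least as large as the observed one."""
    obs = discrepancy(observed, difficulties)
    extreme = 0
    for theta in theta_draws:
        rep = [1 if rng.random() < rasch_prob(theta, b) else 0
               for b in difficulties]
        if discrepancy(rep, difficulties) >= obs:
            extreme += 1
    return extreme / len(theta_draws)

# Illustrative data: the ability draws would normally come from an MCMC run.
rng = random.Random(1)
theta_draws = [0.2 + 0.3 * rng.gauss(0.0, 1.0) for _ in range(500)]
difficulties = [-1.5, -0.5, 0.0, 0.5, 1.5]
observed = [0, 0, 1, 1, 1]  # aberrant: fails easy items, passes hard ones
p = ppp_value(theta_draws, difficulties, observed, guttman_discrepancy, rng)
print(p)  # a small posterior predictive p-value flags the pattern as misfitting
```

This positions the observed discrepancy in its posterior predictive distribution, which is the check the abstract describes.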
Peer reviewed
Meijer, Rob R. – Applied Psychological Measurement, 1994
Through simulation, the power of the U3 statistic was compared with the power of one of the simplest person-fit statistics, the sum of the number of Guttman errors. In most cases, a weighted version of the latter statistic performed as well as the U3 statistic. (SLD)
Descriptors: Error Patterns, Item Response Theory, Nonparametric Statistics, Power (Statistics)
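The number of Guttman errors mentioned in the Meijer (1994) abstract is simple to compute directly. A minimal sketch, assuming the item-score pattern is already ordered from easiest to hardest item (the weighted version studied in the article is not reproduced here):

```python
def guttman_errors(pattern):
    """Count Guttman errors in a 0/1 item-score pattern.

    Assumes items are ordered from easiest (highest proportion-correct)
    to hardest. A Guttman error is a pair (i, j), i < j, where the easier
    item i is answered incorrectly while the harder item j is answered
    correctly.
    """
    errors = 0
    for i in range(len(pattern)):
        for j in range(i + 1, len(pattern)):
            if pattern[i] == 0 and pattern[j] == 1:
                errors += 1
    return errors

# A perfect Guttman pattern (all successes on the easiest items) has no errors.
print(guttman_errors([1, 1, 1, 0, 0]))  # 0
# A reversed pattern maximizes the error count for its total score.
print(guttman_errors([0, 0, 1, 1, 1]))  # 6
```

Larger counts indicate patterns that deviate more from the expected easy-to-hard ordering, which is what makes this one of the simplest person-fit statistics.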
Hendrawan, Irene; Glas, Cees A. W.; Meijer, Rob R. – 2001
The effect of person misfit to an item response theory (IRT) model on a mastery/nonmastery decision was investigated. Also investigated was whether the classification precision can be improved by identifying misfitting respondents using person-fit statistics. A simulation study was conducted to investigate the probability of a correct…
Descriptors: Classification, Decision Making, Estimation (Mathematics), Goodness of Fit
Peer reviewed
Hendrawan, Irene; Glas, Cees A. W.; Meijer, Rob R. – Applied Psychological Measurement, 2005
The effect of person misfit to an item response theory model on a mastery/nonmastery decision was investigated. Furthermore, it was investigated whether the classification precision can be improved by identifying misfitting respondents using person-fit statistics. A simulation study was conducted to investigate the probability of a correct…
Descriptors: Probability, Statistics, Test Length, Simulation
Peer reviewed
Emons, Wilco H. M.; Sijtsma, Klaas; Meijer, Rob R. – Multivariate Behavioral Research, 2004
The person-response function (PRF) relates the probability of an individual's correct answer to the difficulty of items measuring the same latent trait. Local deviations of the observed PRF from the expected PRF indicate person misfit. We discuss two new approaches to investigate person fit. The first approach uses kernel smoothing to estimate…
Descriptors: Probability, Simulation, Item Response Theory, Test Items
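The kernel-smoothing approach in the Emons, Sijtsma, and Meijer abstract can be sketched with a Nadaraya-Watson estimate of the person-response function; the estimator, bandwidth, and function names here are illustrative assumptions, not the article's exact method:

```python
import math

def kernel_prf(difficulties, scores, grid, bandwidth=1.0):
    """Nadaraya-Watson kernel estimate of a person-response function:
    the locally weighted proportion-correct as a function of item
    difficulty, using a Gaussian kernel."""
    prf = []
    for d0 in grid:
        weights = [math.exp(-0.5 * ((d - d0) / bandwidth) ** 2)
                   for d in difficulties]
        prf.append(sum(w * s for w, s in zip(weights, scores)) / sum(weights))
    return prf

# A fitting respondent succeeds on easy items and fails on hard ones,
# so the estimated PRF decreases with item difficulty.
difficulties = [-2.0, -1.0, 0.0, 1.0, 2.0]
scores = [1, 1, 1, 0, 0]
prf = kernel_prf(difficulties, scores, grid=[-2.0, 2.0])
print(prf[0] > prf[1])  # True: higher estimated success on easier items
```

Local stretches where such an estimate rises with difficulty, against the expected decrease, are the kind of deviation the abstract describes as indicating person misfit.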
Meijer, Rob R.; van Krimpen-Stoop, Edith M. L. A. – 1998
Several person-fit statistics have been proposed to detect item score patterns that do not fit an item response theory model. To classify response patterns as not fitting a model, a distribution of a person-fit statistic is needed. The null distributions of several fit statistics have been investigated using conventionally administered tests, but…
Descriptors: Ability, Adaptive Testing, Foreign Countries, Item Response Theory
Peer reviewed
Meijer, Rob R.; And Others – Applied Psychological Measurement, 1994
The power of the nonparametric person-fit statistic, U3, is investigated through simulations as a function of item characteristics, test characteristics, person characteristics, and the group to which examinees belong. Results suggest conditions under which relatively short tests can be used for person-fit analysis. (SLD)
Descriptors: Difficulty Level, Group Membership, Item Response Theory, Nonparametric Statistics
Peer reviewed
Meijer, Rob R.; And Others – Applied Measurement in Education, 1996
Several existing group-based statistics to detect improbable item score patterns are discussed, along with the cut scores proposed in the literature to classify an item score pattern as aberrant. A simulation study and an empirical study are used to compare the statistics and their use and to investigate the practical use of cut scores. (SLD)
Descriptors: Achievement Tests, Classification, Cutting Scores, Identification
Peer reviewed
Meijer, Rob R. – Journal of Educational Measurement, 2004
Two new methods have been proposed to determine unexpected sum scores on sub-tests (testlets) both for paper-and-pencil tests and computer adaptive tests. A method based on a conservative bound using the hypergeometric distribution, denoted p, was compared with a method where the probability for each score combination was calculated using a…
Descriptors: Probability, Adaptive Testing, Item Response Theory, Scores