Showing 1 to 15 of 17 results
Peer reviewed
Egberink, Iris J. L.; Meijer, Rob R.; Tendeiro, Jorge N. – Educational and Psychological Measurement, 2015
A popular method to assess measurement invariance of a particular item is based on likelihood ratio tests with all other items as anchor items. The results of this method are often only reported in terms of statistical significance, and researchers proposed different methods to empirically select anchor items. It is unclear, however, how many…
Descriptors: Personality Measures, Computer Assisted Testing, Measurement, Test Items
Peer reviewed
Tendeiro, Jorge N.; Meijer, Rob R. – Journal of Educational Measurement, 2014
In recent guidelines for fair educational testing it is advised to check the validity of individual test scores through the use of person-fit statistics. For practitioners it is unclear on the basis of the existing literature which statistic to use. An overview of relatively simple existing nonparametric approaches to identify atypical response…
Descriptors: Educational Assessment, Test Validity, Scores, Statistical Analysis
Peer reviewed
Meijer, Rob R.; Egberink, Iris J. L. – Educational and Psychological Measurement, 2012
In recent studies, different methods were proposed to investigate invariant item ordering (IIO), but practical IIO research is an unexploited field in questionnaire construction and evaluation. In the present study, the authors explored the usefulness of different IIO methods to analyze personality scales and clinical scales. From the authors'…
Descriptors: Test Items, Personality Measures, Questionnaires, Item Response Theory
Peer reviewed
Tendeiro, Jorge N.; Meijer, Rob R. – Applied Psychological Measurement, 2013
To classify an item score pattern as not fitting a nonparametric item response theory (NIRT) model, the probability of exceedance (PE) of an observed response vector x can be determined as the sum of the probabilities of all response vectors that are, at most, as likely as x, conditional on the test's total score. Vector x is to be considered…
Descriptors: Probability, Nonparametric Statistics, Goodness of Fit, Test Length
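The probability-of-exceedance (PE) computation described in the abstract above can be sketched directly: conditional on the observed total score, sum the conditional probabilities of all response patterns that are at most as likely as the observed vector x. A minimal illustration in Python, assuming hypothetical item success probabilities (the values below are not from the paper, and real NIRT applications would estimate them from data):

```python
from itertools import combinations

def pattern_prob(pattern, p):
    # Joint probability of a 0/1 response pattern given item success probs p,
    # assuming local independence.
    prob = 1.0
    for x, pi in zip(pattern, p):
        prob *= pi if x == 1 else (1.0 - pi)
    return prob

def probability_of_exceedance(x, p):
    """PE of response vector x: the summed conditional probability (given the
    total score) of all patterns that are at most as likely as x."""
    n, score = len(x), sum(x)
    # Enumerate all patterns with the same total score (the conditioning set).
    patterns = [
        tuple(1 if i in ones else 0 for i in range(n))
        for ones in combinations(range(n), score)
    ]
    probs = {pat: pattern_prob(pat, p) for pat in patterns}
    total = sum(probs.values())  # normalizer: P(total score == score)
    px = probs[tuple(x)]
    return sum(pr for pr in probs.values() if pr <= px) / total

# A Guttman-consistent pattern (easy items correct) gets a high PE; a
# reversed pattern gets a low PE and would be flagged as potentially misfitting.
p = [0.9, 0.7, 0.5, 0.3, 0.2]
pe_good = probability_of_exceedance([1, 1, 1, 0, 0], p)
pe_bad = probability_of_exceedance([0, 0, 1, 1, 1], p)
```

Brute-force enumeration is exponential in test length, which is one reason the paper's treatment of PE in relation to test length matters in practice.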
Peer reviewed
Smits, Iris A. M.; Timmerman, Marieke E.; Meijer, Rob R. – Applied Psychological Measurement, 2012
The assessment of the number of dimensions and the dimensionality structure of questionnaire data is important in scale evaluation. In this study, the authors evaluate two dimensionality assessment procedures in the context of Mokken scale analysis (MSA), using a so-called fixed lower bound. The comparative simulation study, covering various…
Descriptors: Simulation, Measures (Individuals), Program Effectiveness, Item Response Theory
Peer reviewed
Egberink, Iris J. L.; Meijer, Rob R. – Assessment, 2011
The authors investigated the psychometric properties of the subscales of the Self-Perception Profile for Children with item response theory (IRT) models using a sample of 611 children. Results from a nonparametric Mokken analysis and a parametric IRT approach for boys (n = 268) and girls (n = 343) were compared. The authors found that most scales…
Descriptors: Profiles, Psychometrics, Item Response Theory, Self Concept
Peer reviewed
Emons, Wilco H. M.; Meijer, Rob R.; Sijtsma, Klaas – Applied Psychological Measurement, 2002
Studied whether the theoretical sampling distribution of the U3 person-fit statistic is in agreement with the simulated sampling distribution under different item response theory models and varying item and test characteristics. Simulation results suggest that the use of standard normal deviates for the standardized version of the U3 statistic may…
Descriptors: Item Response Theory, Sampling, Simulation, Statistical Distributions
Peer reviewed
Reise, Steven P.; Meijer, Rob R.; Ainsworth, Andrew T.; Morales, Leo S.; Hays, Ron D. – Multivariate Behavioral Research, 2006
Group-level parametric and non-parametric item response theory models were applied to the Consumer Assessment of Healthcare Providers and Systems (CAHPS[R]) 2.0 core items in a sample of 35,572 Medicaid recipients nested within 131 health plans. Results indicated that CAHPS responses are dominated by within health plan variation, and only weakly…
Descriptors: Item Response Theory, Psychometrics, Sample Size, Medical Care Evaluation
Peer reviewed
Meijer, Rob R.; And Others – Applied Psychological Measurement, 1990
Mokken models of monotone homogeneity and double monotonicity and the Rasch model are compared using data from 990 young adult examinees taking a Dutch verbal intelligence test--the Verbal Analogies Test. The model of monotone homogeneity was found suitable for basic testing; more sophisticated applications appear to require parametric models.…
Descriptors: Comparative Analysis, Dutch, Foreign Countries, Goodness of Fit
Glas, Cees A. W.; Meijer, Rob R. – 2001
A Bayesian approach to the evaluation of person fit in item response theory (IRT) models is presented. In a posterior predictive check, the observed value on a discrepancy variable is positioned in its posterior distribution. In a Bayesian framework, a Markov Chain Monte Carlo procedure can be used to generate samples of the posterior distribution…
Descriptors: Bayesian Statistics, Item Response Theory, Markov Processes, Models
Peer reviewed
van Krimpen-Stoop, Edith M. L. A.; Meijer, Rob R. – Applied Psychological Measurement, 2002
Compared the nominal and empirical null distributions of the standardized log-likelihood statistic for polytomous items for paper-and-pencil (P&P) and computerized adaptive tests (CATs). Results show that the empirical distribution of the statistic differed from the assumed standard normal distribution for both P&P tests and CATs. Also…
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Response Theory, Statistical Distributions
Hendrawan, Irene; Glas, Cees A. W.; Meijer, Rob R. – 2001
The effect of person misfit to an item response theory (IRT) model on a mastery/nonmastery decision was investigated. Also investigated was whether the classification precision can be improved by identifying misfitting respondents using person-fit statistics. A simulation study was conducted to investigate the probability of a correct…
Descriptors: Classification, Decision Making, Estimation (Mathematics), Goodness of Fit
van Krimpen-Stoop, Edith M. L. A.; Meijer, Rob R. – 2000
Item scores that do not fit an assumed item response theory model may cause the latent trait value to be estimated inaccurately. For computerized adaptive tests (CAT) with dichotomous items, several person-fit statistics for detecting nonfitting item score patterns have been proposed. Both for paper-and-pencil (P&P) test and CATs, detection of…
Descriptors: Adaptive Testing, Computer Assisted Testing, Goodness of Fit, Item Response Theory
Peer reviewed
Meijer, Rob R. – Journal of Educational Measurement, 2002
Used empirical data from a certification test to study methods from statistical process control that have been proposed to classify an item score pattern as fitting or misfitting the underlying item response theory model in computerized adaptive testing. Results for 1,392 examinees show that different types of misfit can be distinguished. (SLD)
Descriptors: Certification, Classification, Goodness of Fit, Item Response Theory
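Statistical process control methods of the kind mentioned above are often realized as CUSUM charts over item-level residuals accumulated as the adaptive test proceeds. The sketch below is a generic two-sided CUSUM, not the paper's exact statistics; the reference value k and decision threshold h are hypothetical tuning constants:

```python
def cusum_misfit(observed, expected, k=0.1, h=1.0):
    """Two-sided CUSUM over residuals (observed 0/1 score minus model-expected
    probability), in item administration order. Returns True if either the
    upper or lower cumulative sum drifts past the threshold h, signaling a
    run of unexpectedly high or low scores."""
    c_plus, c_minus = 0.0, 0.0
    for x, p in zip(observed, expected):
        r = x - p
        c_plus = max(0.0, c_plus + r - k)   # accumulates positive drift
        c_minus = min(0.0, c_minus + r + k)  # accumulates negative drift
        if c_plus > h or c_minus < -h:
            return True
    return False
```

Because the sums reset toward zero, a CUSUM can localize where in the test the misfit occurs, which is what allows different types of misfit to be distinguished.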
Peer reviewed
Emons, Wilco H. M.; Sijtsma, Klaas; Meijer, Rob R. – Multivariate Behavioral Research, 2004
The person-response function (PRF) relates the probability of an individual's correct answer to the difficulty of items measuring the same latent trait. Local deviations of the observed PRF from the expected PRF indicate person misfit. We discuss two new approaches to investigate person fit. The first approach uses kernel smoothing to estimate…
Descriptors: Probability, Simulation, Item Response Theory, Test Items
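The kernel-smoothing approach described above can be sketched as a Nadaraya-Watson regression of one person's 0/1 item scores on item difficulty, giving a smooth estimate of that person's response function. A minimal illustration, assuming hypothetical difficulties, a Gaussian kernel, and an arbitrary bandwidth (all choices here are illustrative, not the authors'):

```python
import math

def kernel_smoothed_prf(responses, difficulties, grid, bandwidth=0.5):
    """Nadaraya-Watson estimate of P(correct | difficulty) for one person:
    at each grid point, a kernel-weighted average of the 0/1 responses,
    weighted by how close each item's difficulty is to that point."""
    prf = []
    for d0 in grid:
        weights = [
            math.exp(-0.5 * ((d - d0) / bandwidth) ** 2)
            for d in difficulties
        ]
        prf.append(
            sum(w * x for w, x in zip(weights, responses)) / sum(weights)
        )
    return prf

# A person who answers easy items correctly and hard items incorrectly
# yields the expected decreasing PRF; local bumps upward at high difficulty
# would indicate person misfit.
responses = [1, 1, 1, 0, 0]
difficulties = [-2.0, -1.0, 0.0, 1.0, 2.0]
prf = kernel_smoothed_prf(responses, difficulties, grid=[-2.0, 0.0, 2.0])
```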